Remove Top or Bottom Rows Automatically

Dynamically removing top and bottom rows in Power Query can significantly streamline your data cleaning process when dealing with inconsistent data imports. This technique allows you to automatically eliminate unnecessary header rows or footer information based on specific conditions rather than fixed row counts, making your data transformation process more robust and adaptable to changing source files.

The ability to use conditions rather than static numbers is particularly valuable when working with regularly updated reports that may contain varying amounts of metadata or summary information.

In my video, I demonstrate how to implement this dynamic row removal technique, which I learned at an Excel London Meetup:

Understanding the Problem with Static Row Removal

When importing data from text files, CSV files, or other sources, you often encounter extraneous information at the top and bottom of your dataset. These might include title rows, explanatory notes, summary statistics, or footer information that isn’t part of the actual data you need to analyze. Using Power Query’s standard “Remove Top Rows” or “Remove Bottom Rows” functions with a fixed number works fine when your source data structure never changes, but becomes problematic when the number of these rows varies.

In the example shown in my video, we have multiple rows of metadata before the actual header row (which contains “Date” as the first column value), and several rows of additional information at the bottom of the data that need to be removed. There are also some missing values throughout the data that make simple filtering ineffective.

Dynamically Removing Top Rows Based on Conditions

The conventional approach to removing top rows in Power Query involves specifying a fixed number. However, this becomes problematic when the number of header rows changes. The dynamic solution involves using a condition rather than a fixed count.

Here’s how to implement this technique:

  1. Go to the Home tab in the Power Query Editor
  2. Select “Remove Rows” and then “Remove Top Rows”
  3. Instead of entering a number, modify the formula to use the “each” keyword followed by a condition

The formula will look something like this: each [Column1] <> "Date" (assuming “Date” is the header text in your first column). This tells Power Query to keep removing rows until it finds a row where the first column contains the text “Date”.

The Technical Details of the Table.Skip Function

Behind the scenes, Power Query uses the Table.Skip function when you remove top rows. This function has a hidden capability that isn’t obvious from the user interface: it can accept either a count or a condition as its second parameter.

When using a condition, the syntax changes from simply providing a number to using the format:

each [condition]

The condition is evaluated for each row, starting from the top, and rows are removed as long as the condition is true. Once a row fails the condition, that row and everything below it is kept. This allows for dynamic adaptation to varying source data structures.
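Written out in full, the generated step looks like this. A minimal sketch, assuming the previous step is named Source and the header text is “Date”:

= Table.Skip(Source, each [Column1] <> "Date")

Table.Skip drops rows from the top for as long as the condition returns true, so the first row where Column1 equals “Date” and every row after it are kept.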

Dynamically Removing Bottom Rows

Similarly, we can apply the same concept to remove rows from the bottom of our dataset. This is particularly useful when dealing with files that contain summary information, notes, or other footer data that should be excluded from analysis.

The process for removing bottom rows dynamically is:

  1. Go to the Home tab in the Power Query Editor
  2. Select “Remove Rows” and then “Remove Bottom Rows”
  3. Replace the static number with a condition using the “each” keyword

For example, you might use a formula like each [Merchant] = "" to remove rows from the bottom where the Merchant column contains an empty text string. Or you might use each [Revenue] = null to remove rows where the Revenue column contains null values.
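Behind the scenes, “Remove Bottom Rows” uses the Table.RemoveLastN function, which accepts a condition in the same way as Table.Skip. A sketch, assuming the previous step is named #"Removed Top Rows":

= Table.RemoveLastN(#"Removed Top Rows", each [Merchant] = "")

or, for null values:

= Table.RemoveLastN(#"Removed Top Rows", each [Revenue] = null)

Rows are removed from the bottom upwards while the condition holds.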

Handling Different Types of Empty Values

When working with bottom rows, it’s important to understand the different types of empty values that might appear in your data:

  • Empty text strings, represented by "" in formulas
  • Null values, represented by null in formulas
  • Missing values, which might be null or empty depending on the data source

In the video demonstration, I show how to handle both empty text strings and null values as conditions for removing bottom rows. The key is to identify which column and which type of empty value reliably indicates the footer section of your data.
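If the footer rows contain a mix of both kinds of emptiness, the two checks can be combined with the or operator in a single condition. A sketch, again assuming a Merchant column and a previous step named #"Removed Top Rows":

= Table.RemoveLastN(#"Removed Top Rows", each [Merchant] = "" or [Merchant] = null)

Removal then stops at the first row from the bottom where Merchant holds an actual value.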

Practical Applications and Benefits

This dynamic row removal technique is particularly valuable in several scenarios:

When working with regularly updated reports where the structure might change slightly between versions, this approach ensures your Power Query solution remains robust. It’s also helpful when dealing with data exports from systems that include varying amounts of metadata, or when consolidating multiple files that might have different header structures.

The major benefits include:

  • More resilient data transformation processes that don’t break when source formats change slightly
  • Reduced need for manual intervention when processing new data
  • The ability to handle files with inconsistent structure automatically
  • Greater flexibility compared to static row removal or simple filtering

This technique demonstrates the power of the M language in Power Query, allowing for solutions that go beyond what’s immediately available in the user interface. By understanding and leveraging these more advanced capabilities, you can create more robust data transformation processes.

Important Considerations

When implementing this technique, keep in mind a few important points:

The condition you use must reliably identify the boundary between the rows you want to keep and those you want to remove. Choose column values that are consistently present (or consistently absent) at these boundaries. Also be aware that if your condition never evaluates to false, you could remove all rows from your dataset, so testing with representative sample data is essential.

Additionally, remember that this technique works even when you have missing values in your actual data. As shown in the video, rows are only removed while they match the specific condition you’ve defined, so rows with some missing values are retained as long as they don’t match your removal condition.

How to Import Fixed-Width Files to Excel

Importing fixed-width column data to Excel can be challenging: because each column has a set number of characters, the import often brings in unnecessary padding spaces. In this article, I’ll show you how to properly import fixed-width files using Power Query, ensuring clean data without those problematic extra spaces.

This technique is essential for handling structured data exports from legacy systems or standardized reports.

Understanding Fixed-Width Column Data

Fixed-width column data is a format where each column occupies exactly the same number of characters in every row. Unlike delimited formats (like comma-separated values), fixed-width files allocate a predetermined number of characters to each field, so shorter values get padded with spaces to maintain the fixed width. For example, the date column might always be 10 characters wide, the merchant column 20 characters, and so on.

This format is common in older systems, government data, and standardized reports, but it presents unique challenges when importing to Excel. The main issue is that the padding spaces can cause problems with data analysis and visualization if not properly handled during the import process.

Importing Fixed-Width Data Using Power Query

Power Query provides excellent tools for handling fixed-width column data. Here’s the process to properly import this type of data:

Initial Import Steps

To begin importing fixed-width data, follow these steps:

  1. Go to the Data tab in Excel
  2. Click on the “From Text/CSV” option
  3. Locate and select your fixed-width file

When the import window opens, you might need to adjust its size to see all available options, as the third column of settings may be hidden if the window is too narrow. Make sure to expand the window to see all import settings.

Configuring Import Settings

During the import process, you’ll need to configure several important settings:

First, check the file origin (encoding); UTF-8 works well for most modern files. Then verify that the delimiter is set correctly. Power Query typically detects fixed-width files automatically, so you should see “Fixed Width” selected rather than comma, tab, or another delimiter.

The data type detection setting is another crucial choice. You have three options:

  • Based on first 200 rows
  • Based on entire dataset
  • Do not detect data types

In most cases with fixed-width data, it’s better to select “Do not detect data types” initially and handle data type conversion after cleaning the spaces. This prevents Power Query from making assumptions about your data types that might be skewed by the extra spaces.

Handling Extra Spaces in Power Query

The key challenge with fixed-width data is managing the extra spaces that pad each column. When examining the data in Power Query, you’ll notice these extra spaces in both the column headers and the text fields. These spaces are necessary in the original file to maintain the fixed-width format, but they’re problematic for analysis in Excel.

For example, when clicking on a column header, you might see something like “Date ” with several spaces after the actual name. Similarly, text fields like product names or merchant names will have trailing spaces to fill their allocated width.

Removing Extra Spaces

To clean up the data properly:

  1. Select all columns in the Power Query Editor by clicking the first column header and then Shift-clicking the last column
  2. Go to the Transform tab
  3. Click on “Format” and select “Trim”

This important step removes all leading and trailing whitespace from every cell in your dataset while preserving spaces that are actually part of your data (like spaces between words in product names). After trimming, you’ll notice a significant improvement in how your data looks.
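In the formula bar, this UI action produces a Table.TransformColumns step that applies Text.Trim to each selected column. A sketch, assuming a previous step named Source and columns named Date, Merchant and Revenue (your column names will differ):

= Table.TransformColumns(Source, {{"Date", Text.Trim, type text}, {"Merchant", Text.Trim, type text}, {"Revenue", Text.Trim, type text}})

Each inner list names a column, the transformation to apply, and the resulting data type.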

Setting Headers and Data Types

After removing the extra spaces, you can proceed with configuring headers and data types:

Use the “Use First Row as Headers” option (found on the Transform tab) to set your headers properly. Now that the spaces have been removed, your column names will appear clean and professional.

With clean data, you can now set appropriate data types for each column. For example, date columns should be formatted as dates, and revenue or cost columns as currency or decimal numbers. Power Query will suggest appropriate types, but you can change them manually by clicking the data type icon in each column header.
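In M, these two actions correspond to steps along these lines (a sketch; the step and column names are assumptions):

= Table.PromoteHeaders(#"Trimmed Text")

= Table.TransformColumnTypes(#"Promoted Headers", {{"Date", type date}, {"Revenue", Currency.Type}})

Table.PromoteHeaders lifts the first row into the column headers, and Table.TransformColumnTypes assigns a type to each listed column.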

Loading Data to Excel

The final step is to load your cleaned data into Excel:

  1. Click “Close & Load” (or “Close & Load To” for more options)
  2. Select whether to load as a table or to a specific location
  3. Choose an existing worksheet or create a new one

When loading the data, you have the option to add it to the data model if you plan to use it with Power Pivot, but this isn’t necessary for basic Excel analysis. One important thing to note is that while Power Query may detect currency data types, this doesn’t automatically apply Excel’s currency formatting to your table. You’ll need to apply appropriate number formatting in Excel after importing.

Fortunately, when you refresh your data in the future, Excel will maintain any number formatting you’ve applied to the table, making future updates seamless.

Benefits of Using Power Query for Fixed-Width Data

Using Power Query for importing fixed-width data offers several advantages over other methods:

The ability to transform data before loading it to Excel saves significant time compared to manual cleanup after import. The process is also repeatable: if you receive updated versions of the same file, you can simply refresh your query rather than repeating the entire import and cleanup process.

Additionally, Power Query maintains a record of all transformations applied, making your data preparation process transparent and easily adjustable if needed. This approach ensures consistency in how your fixed-width data is processed each time.

How to Clean one column based on the other column

Power Query offers powerful solutions for cleaning one column by leveraging information contained in a related column. In this article, I’ll demonstrate how to efficiently remove cryptocurrency symbols from a name column when those symbols already exist in a separate column, even though the symbols vary in length.

This technique is particularly valuable when dealing with inconsistent data formats that require standardization for analysis.

Watch my step-by-step tutorial video to see these techniques in action:

Understanding the Data Cleaning Challenge

When working with financial data, especially cryptocurrency information, you’ll often encounter datasets where the same information appears redundantly across multiple columns. In my example, I have a table containing cryptocurrency data where the name column includes both the cryptocurrency name and its symbol (like “Bitcoin BTC”), while a separate symbol column already contains just the symbol (“BTC”).

The challenge lies in cleaning the name column by removing these redundant symbols. What makes this particularly tricky is that cryptocurrency symbols vary in length: some are three characters (BTC), others four (DOGE) or more. This inconsistency means we can’t simply remove a fixed number of characters from each name.

Preparing the Data in Power Query

To begin the cleaning process, we need to load our data into Power Query, where we can apply the necessary transformations:

  1. Navigate to the Data tab in Excel
  2. Select your data range and choose to load it to Power Query
  3. Remove any unnecessary columns to focus on the task at hand

In my case, I initially removed several columns to focus on just the key columns for this demonstration: the name column containing both cryptocurrency names and symbols, and the symbol column containing just the symbols. This simplified view lets us concentrate on the cleaning technique without distraction.

Solution #1: Using Text After Delimiter

The first solution uses Power Query’s built-in functions to extract the relevant part of the text. This approach is more straightforward and uses the UI-based options:

To implement this solution:

  1. Select the name column in Power Query
  2. Go to the Add Column tab
  3. Choose Extract and then Text After Delimiter

Initially, when using the UI, you might try entering a specific symbol like “BTC” as the delimiter. However, this would only work for one cryptocurrency. The real power comes from modifying the formula to use the symbol column as a reference.

After creating the initial formula, you’ll need to edit it in the formula bar to make it dynamic. Replace the hardcoded delimiter text with a reference to the symbol column. This way, Power Query will use each row’s corresponding symbol to extract just the cryptocurrency name, regardless of the symbol’s length.
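The change is small. The UI-generated step with a hardcoded delimiter might read like this (a sketch; the column names follow the example data):

= Table.AddColumn(Source, "Text After Delimiter", each Text.AfterDelimiter([name], "BTC"), type text)

and swapping the hardcoded "BTC" for a reference to the symbol column makes it dynamic:

= Table.AddColumn(Source, "Text After Delimiter", each Text.AfterDelimiter([name], [symbol]), type text)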

Solution #2: Using Text.RemoveRange Function

The second solution is more involved but demonstrates the flexibility of Power Query’s M language. For this approach:

  1. Go to the Add Column tab
  2. Select Custom Column
  3. Use the Text.RemoveRange function to precisely remove the symbol part

The formula will look something like:

Text.RemoveRange([name], 0, Text.Length([symbol]))

This function works by removing a specific range of characters from the text. It requires three parameters:

  • The source text (our name column)
  • The starting position (0-based in Power Query)
  • The number of characters to remove (which we determine using the length of the symbol)

Remember that Power Query uses zero-based indexing, meaning the first character position is 0, not 1. This is crucial when working with text manipulation functions.
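As a complete Custom Column step, the formula above would appear as follows (a sketch, assuming the previous step is named Source):

= Table.AddColumn(Source, "Custom", each Text.RemoveRange([name], 0, Text.Length([symbol])))

Because the count argument comes from the symbol column, the step adapts row by row: three characters are removed for BTC, four for DOGE, and so on.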

Finalizing the Transformation

After applying either solution, we need to clean up our query:

  1. Rename the newly created column to something meaningful (like “name”)
  2. Delete the original name column that contained the symbols
  3. If needed, restore any previously removed columns by deleting the “Removed Columns” step
  4. Arrange the columns in the desired order by dragging them

In my example, I also demonstrated how to clean another column (circulating supply) that had a similar issue but a simpler pattern. Since all the unwanted symbols in this column were preceded by a space, I could use the Extract Text Before Delimiter option with a space as the delimiter.

Loading the Cleaned Data Back to Excel

Once all transformations are complete, we can load the cleaned data back to Excel:

  1. Go to the Home tab in Power Query
  2. Select Close & Load To
  3. Choose whether to load to a new worksheet or an existing location

In my demonstration, I loaded the data to cell A13 on the existing worksheet, which allows for easy comparison between the original and cleaned data.

Why This Approach Matters

The beauty of using Power Query for this type of data cleaning is that it creates a repeatable process. If your data source updates, you can simply refresh the query, and all the cleaning steps will be reapplied automatically. This saves significant time compared to manual cleaning, or to formulas that would need to be copied down for each new set of data.

Additionally, by leveraging information from one column to clean another, we create a more robust solution that can handle variations in the data format without breaking. This approach is far superior to hard-coding specific text values or character counts that might change across different datasets.

How to Remove Empty Columns in Excel with Power Query (Automated Method)

In this guide, I’ll show you how to automatically remove completely empty columns from your Excel data using Power Query. This powerful technique saves you from tedious manual deletion and works even when columns contain a mix of null values, empty cells, and empty text strings (represented by "").

The approach I’ll demonstrate uses a specific Power Query formula that identifies and removes columns with no meaningful data, greatly improving your data cleaning workflow.

Watch my detailed video walkthrough of this process:

Understanding Empty Columns in Power Query

When working with data in Excel, we often encounter columns that appear empty but actually contain different types of “emptiness” that Power Query treats differently. It’s important to understand that Power Query distinguishes between null values (truly empty cells) and cells containing empty text strings ("").

An empty text string occurs when a cell contains a formula that returns "" or when you paste such values. These look empty in Excel, but Power Query treats them differently from cells with null values. Our goal is a solution that identifies and removes columns regardless of which type of “emptiness” they contain, as long as the entire column holds no meaningful data.
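The distinction is easy to see in a quick M expression (for example, entered in a blank query):

= List.RemoveMatchingItems({null, "", "Bitcoin"}, {null, ""})

This returns a list containing only "Bitcoin": both kinds of emptiness are stripped out, which is exactly the test the formula below applies to every column.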

Importing Data into Power Query

To begin the process of removing empty columns, we first need to import our data into Power Query. Here’s how to do this:

  1. Go to the Data tab in Excel
  2. Click on “Get Data” → “From File” → “From Excel”
  3. Locate and select your Excel file
  4. Choose the worksheet containing your data (in my example, Sheet1)

After importing, you’ll notice that some columns might appear empty but contain different types of emptiness: some cells might have null values (shown as “null” in Power Query), while others might contain empty text strings. Our code will handle all these scenarios automatically.

The Power Query Solution

Rather than manually deleting empty columns, we’ll use a custom formula in Power Query that automatically identifies and removes all empty columns. This approach is much more efficient, especially when working with large datasets.

The Complete Code

Here’s the complete Power Query formula that we’ll use to remove empty columns:

= Table.RemoveColumns(
      #"Changed Type",
      List.Select(
          Table.ColumnNames(#"Changed Type"),
          each List.Count(
              List.RemoveMatchingItems(
                  Table.Column(#"Changed Type", _),
                  {null, ""}
              )
          ) = 0
      )
  )

To use this code in your own projects, replace #"Changed Type" with the name of your previous step in Power Query. This reference appears in three places in the formula, so be sure to update all instances.

Breaking Down the Formula

Let’s break down this formula to understand how it works:

The formula starts with Table.RemoveColumns(), which is the function that will ultimately remove the empty columns from the table. This function takes two parameters: the table we’re working with, and a list of column names to remove.

To generate the list of column names to remove, we use List.Select(), which filters a list based on a condition. We’re filtering the list of all column names (obtained using Table.ColumnNames()) to find those that meet our “empty column” criteria.

The core logic happens in this part:

List.Count(List.RemoveMatchingItems(Table.Column(#"Changed Type", _), {null, ""})) = 0

For each column, we:

  1. Extract all values in the column using Table.Column()
  2. Remove any values that are null or empty strings using List.RemoveMatchingItems()
  3. Count how many items remain using List.Count()
  4. Check whether this count equals zero (meaning the column is completely empty)

If the count equals zero, the column is considered empty and will be included in our list of columns to remove.

Implementing the Solution

To implement this solution in your Power Query Editor:

  1. Click the Add Step (fx) button next to the formula bar to create a new transformation step
  2. In the formula bar, paste the complete code shown above
  3. Replace #"Changed Type" with the name of your previous step (typically the automatic type-detection step after promoting headers)
  4. Press Enter to apply the transformation

You should now see your data with all empty columns removed. The transformation happens immediately, leaving only columns that contain at least one non-empty value.

Testing the Solution

To verify that our solution works correctly, we can test it with different types of data. In my video, I demonstrated this by:

1. Starting with a dataset that contained several empty columns (with various types of emptiness)

2. Applying our Power Query solution to remove these columns

3. Modifying the source data to add a new column with a single value

4. Refreshing the query to verify that the column with a value is preserved

The solution correctly identified and retained columns that had at least one meaningful value, while removing columns that were completely empty.

Advanced Usage and Adaptations

This solution can easily be adapted to other datasets. The key is to ensure you’re referencing the correct previous step in your Power Query transformation sequence. If your previous step is named differently, simply replace all instances of #"Changed Type" with the appropriate step name.

When working with very large datasets, this automated approach can save significant time compared to manually identifying and removing empty columns. Additionally, the solution is dynamic: if your source data changes and a previously empty column now contains values, refreshing the query will automatically retain that column.

Loading the Results

After applying our transformation, we can load the results back to Excel:

  1. Click “Close & Load” (or “Close & Load To…” if you want to specify a destination)
  2. Choose your preferred destination (worksheet, data model, etc.)
  3. The cleaned data, with empty columns removed, will be loaded to your specified location

The resulting table will contain only meaningful columns that have at least one value. This makes your data more concise and easier to work with.

When your source data changes, simply refresh the query to apply the same transformations to the updated data. The empty-column removal logic will be reapplied automatically, adapting to any changes in your source data’s structure.