

    STUDENT ACCOUNTANT PAPER F5

    ACCA 2010

    THE USE OF QUANTITATIVE TECHNIQUES IN BUDGETING

RELEVANT TO ACCA QUALIFICATION PAPER F5

    A budget can be defined as a quantified plan relating to a given period.

It's not surprising, therefore, that there are a number of quantitative

    techniques available to help us prepare a budget. Most of these

    techniques help address forecasting and planning issues such as:

    If I make this quantity, what will costs be?

    How much do we expect to sell in the first quarter of 2011?

    How long will it take people to make these units?

    This article looks at four quantitative techniques:

    the high-low (or range) method

    least squares linear regression

    time series analysis

    learning curves.

    THE HIGH-LOW (OR RANGE) METHOD

    The high-low method is a quick but crude technique. It is usually seen in

    the context of separating fixed and variable costs.

    EXAMPLE 1

              Units produced   Total factory costs ($)
Quarter 1     1,000            35,000
Quarter 2     1,500            45,000
Quarter 3     2,000            50,000
Quarter 4     1,800            48,000

Just by looking at the figures in Example 1 you can see that the costs are not purely variable, otherwise they would double from Quarter 1 to Quarter 3 as output doubles. The assumption is that there are also fixed costs in


each of the four quarters, and these can be eliminated by subtracting one quarter's total costs from another's.

    The range method looks at the costs incurred at the lowest and highest

outputs and says that any increment in costs must be purely the result of additional variable costs. So:

                 Units    Costs ($)
Highest output   2,000    50,000
Lowest output    1,000    35,000
Difference       1,000    15,000

    If the extra 1,000 units are causing the extra $15,000 in costs then the

    variable cost per unit is $15,000/1,000 = $15.

The remainder of the total costs must be fixed cost, and this can be estimated using either the highest or the lowest data set:

    Lowest output:

    Total costs = $35,000 of which 1,000 x $15 = $15,000 are variable. The

    remaining $20,000 of the total costs must therefore be fixed.

    Or

    Highest output:

    Total costs = $50,000 of which 2,000 x $15 = $30,000 are variable. The

    remaining $20,000 of the total costs must therefore be fixed.

    Armed with this information, we can estimate costs at any level of output.

    For example:

    Output = 1,200 units, costs = $20,000 + 1,200 x $15 = $38,000
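If you want to check this sort of calculation quickly, the whole method fits in a few lines of Python. The sketch below is only illustrative (the function name high_low and the variable names are mine, not the article's):

# High-low (range) method applied to the Example 1 data.
def high_low(observations):
    # observations: list of (units, total_cost) pairs
    low = min(observations, key=lambda p: p[0])    # lowest activity level
    high = max(observations, key=lambda p: p[0])   # highest activity level
    variable = (high[1] - low[1]) / (high[0] - low[0])
    fixed = low[1] - variable * low[0]             # same result from the highest pair
    return variable, fixed

quarters = [(1000, 35000), (1500, 45000), (2000, 50000), (1800, 48000)]
v, f = high_low(quarters)
print(v, f)              # 15.0 and 20000.0
print(f + 1200 * v)      # estimated cost for 1,200 units: 38000.0

Either the highest or the lowest observation gives the same fixed cost, just as in the manual working above.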


    The high-low method is quick and easy but, as said earlier, crude. For

    example, all the data falling between the highest and lowest values are

    ignored.

You will see here that the predicted cost for an output of 1,800 units is $20,000 + 1,800 x $15 = $47,000, whereas the actual reading was

    $48,000. No matter how sophisticated the forecasting tool, there will

    always be anomalies between actual and forecast data: factors such as

efficiency, commodity prices, and the weather can all change unexpectedly

    and cause forecasting anomalies.

    LEAST SQUARES LINEAR REGRESSION

    Linear regression is an objective way of fitting the best possible straight

    line through any set of points.

    Within that sentence lie two important warnings concerning the use of

    linear regression:

    It results in a straight line, even if a curved or kinked line might be

    better.

It will work for any set of points, so you could collate people's ages

    and apartment or house numbers and a linear regression would

    give the best fit possible between where you live and your age (but

    whether you are then expected to move every birthday is not

    clear!).

So, the mere fact that you can perform a least squares regression gives you no information whatsoever about how much you can rely on the predictive

    power of the result. For that, you also have to calculate the coefficient of

    correlation, or r.


EXAMPLE 2

Month   Volume   Costs ($)
1       1,000    8,500
2       1,200    9,600
3       1,800    14,000
4       900      7,000
5       2,000    16,000
6       400      5,000

    You can see from Figure 1 that when plotted, the data does follow a

    straight line relationship fairly well, and we could draw a fairly accurate

    straight line to represent how cost depended on volume, but linear

    regression takes the subjectivity out of that.

Month   Volume (x)   Costs ($) (y)   xy           x²           y²
1       1,000        8,500           8,500,000    1,000,000    72,250,000
2       1,200        9,600           11,520,000   1,440,000    92,160,000
3       1,800        14,000          25,200,000   3,240,000    196,000,000
4       900          7,000           6,300,000    810,000      49,000,000
5       2,000        16,000          32,000,000   4,000,000    256,000,000
6       400          5,000           2,000,000    160,000      25,000,000

n = 6   Σx = 7,300   Σy = 60,100   Σxy = 85,520,000   Σx² = 10,650,000   Σy² = 690,410,000

b = (nΣxy - ΣxΣy)/(nΣx² - (Σx)²)
  = (6 x 85,520,000 - 7,300 x 60,100)/(6 x 10,650,000 - 7,300²)
  = 74,390,000/10,610,000
  = 7 (approximately)


a = 60,100/6 - 7 x 7,300/6 = 1,500

    This is interpreted as:

    Total costs (y) = 1,500 + 7 x Volume (x)

    Where $1,500 = fixed cost and $7 = variable cost per unit

The regression line is shown in Figure 2 (in red), superimposed on the original data points.

So, if we were asked to predict costs for an output of 1,500 units, we would

    predict:

    Total costs = 1,500 + 7 x 1,500 = $12,000
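For anyone who wants to reproduce a and b without working through the table by hand, here is a minimal Python sketch of the least squares formulas (the variable names are illustrative; note that the unrounded slope is about 7.01, which the article rounds to 7):

# Least squares estimates for the Example 2 data.
volumes = [1000, 1200, 1800, 900, 2000, 400]        # x
costs   = [8500, 9600, 14000, 7000, 16000, 5000]    # y

n = len(volumes)
sum_x, sum_y = sum(volumes), sum(costs)
sum_xy = sum(x * y for x, y in zip(volumes, costs))
sum_x2 = sum(x * x for x in volumes)

b = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)  # about 7.01
a = sum_y / n - b * sum_x / n        # about 1,486 (1,500 if b is rounded to 7)

print(round(b, 2), round(a))
print(round(a + b * 1500))           # about 12,003, close to the article's $12,000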

    Even though we can see from Figure 2 that the line is a good fit, it is still

    important that the coefficient of correlation is calculated.

r = (nΣxy - ΣxΣy)/√[(nΣx² - (Σx)²)(nΣy² - (Σy)²)]
  = 74,390,000/√[(6 x 10,650,000 - 7,300 x 7,300)(6 x 690,410,000 - 60,100 x 60,100)]
  = 0.99


    If r = 1, there is perfect positive correlation (all points fit on the

    regression line) and as one variable increases, so does the other.

    If r = -1, there is perfect negative correlation (all points fit on the

    regression line) and as one variable increases, the other decreases.

    If r = 0, there is no correlation.

More usefully, r² is the coefficient of determination, and this can be interpreted as the proportion of variation in one variable that is explained by variation in the other. Here, r² is 0.98, meaning that there is a very good association between volume and costs. In other words, the formula derived should be good at predicting costs from volume.
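The correlation statistics can be checked in the same way. Here is a minimal sketch, again with illustrative names only, that applies the standard formula for r to the Example 2 data:

# Coefficient of correlation (r) and coefficient of determination (r squared).
from math import sqrt

volumes = [1000, 1200, 1800, 900, 2000, 400]
costs   = [8500, 9600, 14000, 7000, 16000, 5000]

n = len(volumes)
sum_x, sum_y = sum(volumes), sum(costs)
sum_xy = sum(x * y for x, y in zip(volumes, costs))
sum_x2 = sum(x * x for x in volumes)
sum_y2 = sum(y * y for y in costs)

r = (n * sum_xy - sum_x * sum_y) / sqrt(
    (n * sum_x2 - sum_x ** 2) * (n * sum_y2 - sum_y ** 2))

print(round(r, 2), round(r ** 2, 2))   # 0.99 and 0.98, as above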

    But take heed of these warnings:

    1 A linear regression based on very few points, even with a good

    coefficient of correlation, is not necessarily reliable. In the extreme

    case, if you had only two readings, anyone could draw a straight line

    through them but it would prove nothing.

    2 Good correlation does not prove cause and effect. In Example 2, few

    would argue that producing more units would not cause more costs.

    But what if we were to do a linear regression on advertising and sales

    volume and found a strong correlation coefficient? That does not prove

    that more advertising causes more sales. For example, we might have

seen the economy recovering and so decided to advertise more, but the

    extra sales could have happened anyway as the economy improved.

    3 It is dangerous to extrapolate. What we mean by this is to go outside

    the range of data used in the analysis. In Example 2, we forecast

    costs of $12,000 for output of 1,500 units, which is an example of

interpolation because we have examined and used data on either side of 1,500 units. If, however, we were asked to predict costs for an


    output volume of 3,000 units, then we have no experimental data

    which we can use. Applying the formula would imply:

Total costs = 1,500 + 7 x 3,000 = $22,500

    But how do we know that costs keep behaving in the same way at this

    higher level of output? Fixed costs could step up, variable costs per

    unit could increase, and employees may have to be paid overtime

    rates.

4 You must remove other known effects before attempting to uncover

    the association between two variables. If, for example, instead of our

data relating to months 1 to 6 it related to years 1 to 6, then other

    effects are likely to interfere. The easiest to correct would be inflation

effects. Say that inflation runs at the rate of 5% each year; then, before attempting the regression, we should inflation-adjust all the money amounts. The data would then be:

Year   Volume   Costs ($)   Inflation adjustment   Inflation-adjusted costs ($)
1      1,000    85,000      x (1.05)⁵              108,484
2      1,200    96,000      x (1.05)⁴              116,689
3      1,800    140,000     x (1.05)³              162,068
4      900      70,000      x (1.05)²              77,175
5      2,000    160,000     x 1.05                 168,000
6      400      50,000      Current $ (now)        50,000

    The regression should be carried out on the data in the Volume and

    Inflation-adjusted costs columns.
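A short, illustrative Python sketch of that adjustment is shown below; it simply restates each year's cost in current (year 6) money at 5% a year before the regression is attempted (the data layout is mine, not the article's):

# Inflation-adjust historical costs to current (year 6) terms at 5% a year.
inflation = 0.05
data = {   # year: (volume, cost as originally recorded, $)
    1: (1000, 85000),
    2: (1200, 96000),
    3: (1800, 140000),
    4: (900, 70000),
    5: (2000, 160000),
    6: (400, 50000),   # already in current $
}

current_year = max(data)
adjusted = {year: (volume, cost * (1 + inflation) ** (current_year - year))
            for year, (volume, cost) in data.items()}

for year, (volume, cost) in adjusted.items():
    print(year, volume, round(cost))
# Year 1 becomes about 108,484 and year 6 stays at 50,000, as in the table above.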

Other effects should also be adjusted for, if possible, such as supply difficulties affecting one period's costs. Remember, however, that over


an extended period, the technologies used and products made can change, and these changes will also have confounding effects on the cost-volume relationship.

5 Always be aware that a linear relationship might not be appropriate.

    The relationship might be better described by a curve or a kinked line.

    TIME SERIES ANALYSIS

Linear regression assumes that the relationship between two variables is strictly linear, that is, explained by a straight line. Time series analysis is more

    adaptable and recognises that the following effects could be present:

    1 A trend: this is an underlying smooth increase or decrease of an

    amount as time passes.

    2 Seasonal variations: cycles of variation repeating in less than a year,

such as spring, summer, autumn and winter, or sales for each day of the week.

    3 Cyclical variations: cycles of variation repeating in more than a year,

    typically, the long-term trade cycle.

    4 Random effects: non-repetitive and non-predictable variations.

    Time series analysis investigates the first two of these.

EXAMPLE 3

Look at this data, which is plotted in Figure 3 as Sales against Time:

Year   Qtr   Time series   Sales ($000)
2006   1     1             989.0
       2     2             990.0


       3     3             994.0
       4     4             1,015.0
2007   1     5             1,030.0
       2     6             1,042.5
       3     7             1,036.0
       4     8             1,056.5
2008   1     9             1,071.0
       2     10            1,083.5
       3     11            1,079.5
       4     12            1,099.5
2009   1     13            1,115.5
       2     14            1,127.5
       3     15            1,123.5
       4     16            1,135.0
2010   1     17            1,140.0

    You can see from Figure 3 that there is some sort of trend (the line

    increases overall) and there are seasonal variations with a dip occurring at

    times 7, 11, and 15, corresponding to the third quarter of each year.

    Quarter 2 tends to look high each year. So, if we are going to try to

    forecast sales for the third quarter of 2010, we would first try to project

    the trend, then superimpose the seasonal effect on to the trend in order

    to decrease it appropriately.

    Performing a time series analysis is rather tedious and it is likely, in any

    exam question, that much of the work will have been done for you leaving

    you to interpret and apply the results. However, for the purposes of

    explanation, we will carry out the full process on this data.


Year   Qtr   Time     Raw       4-part     Centred    Seasonal      Seasonal
             series   data      moving     moving     variation     variation
                                average    average    (additive)    (multiplicative)
2006   1     1        989.0
       2     2        990.0
                                997.0
       3     3        994.0                1,002.1    -8.1          0.9919
                                1,007.3
       4     4        1,015.0              1,013.8    1.2           1.0012
                                1,020.4
2007   1     5        1,030.0              1,025.6    4.4           1.0043
                                1,030.9
       2     6        1,042.5              1,036.1    6.4           1.0062
                                1,041.3
       3     7        1,036.0              1,046.4    -10.4         0.9901
                                1,051.5
       4     8        1,056.5              1,056.6    -0.1          0.9999
                                1,061.8
2008   1     9        1,071.0              1,067.2    3.8           1.0036
                                1,072.6
       2     10       1,083.5              1,078.0    5.5           1.0051
                                1,083.4
       3     11       1,079.5              1,088.9    -9.4          0.9913
                                1,094.5
       4     12       1,099.5              1,100.0    -0.5          0.9995
                                1,105.5
2009   1     13       1,115.5              1,111.0    4.5           1.0041
                                1,116.5
       2     14       1,127.5              1,120.9    6.6           1.0059
                                1,125.4
       3     15       1,123.5
                                1,131.5
       4     16       1,135.0
2010   1     17       1,140.0

-8.1 = 994.0 - 1,002.1
0.9919 = 994.0/1,002.1


    The first four columns are as before.

Column five is called the 4-part moving average: 4-part because we believe the data repeats over four seasons. If we thought it repeated over, say, five days in the week, we would create the 5-part moving average.

    The moving average is the average of the four components in the cycle.

    So, in 2006, it is:

    997 = (989 + 990 + 994 + 1015)/4

    Then, moving down one season:

    1,007.3 = (990 + 994 + 1,015 + 1,030)/4

    Progressing down the data, the 4-part moving average contains one

    element from each season. This is really where we can isolate the trend

    because the high season and low season components tend to cancel out.

    The trouble with 4-part moving averages (or any even periodicity) is that

    the moving average is not really opposite any season. To get a figure

which is centred on a season, adjacent moving averages are themselves averaged. This is not necessary if we start with, say, five seasons in the

    repetitive cycle.

    Therefore:

    1,002.1 = (997.0 + 1,007.3)/2

1,013.8 = (1,007.3 + 1,020.4)/2

This data represents the trend line and, if plotted on a graph, it would look like Figure 4.
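The moving average arithmetic is also easy to automate. The sketch below is only an illustration (the variable names are mine, not the article's); it reproduces the 4-part and centred moving averages for the Example 3 sales, together with the additive and multiplicative seasonal variations:

# 4-part and centred moving averages for the Example 3 sales.
sales = [989.0, 990.0, 994.0, 1015.0, 1030.0, 1042.5, 1036.0, 1056.5,
         1071.0, 1083.5, 1079.5, 1099.5, 1115.5, 1127.5, 1123.5, 1135.0,
         1140.0]
period = 4   # quarterly data

# 4-part moving averages fall "between" seasons ...
ma4 = [sum(sales[i:i + period]) / period for i in range(len(sales) - period + 1)]

# ... so adjacent pairs are averaged to centre them on a season.
centred = [(a + b) / 2 for a, b in zip(ma4, ma4[1:])]

for i, trend in enumerate(centred, start=period // 2):
    additive = sales[i] - trend        # eg 994.0 - 1,002.1 = -8.1
    multiplicative = sales[i] / trend  # eg 994.0 / 1,002.1 = 0.9919
    print(i + 1, round(trend, 1), round(additive, 1), round(multiplicative, 4))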


    This method of predicting future amounts is more sophisticated than

linear regression, but neither method, of course, guarantees an accurate answer. However, they are at least based on historical evidence and this

    must surely be better than pure guesswork.
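The forecasting step described earlier (project the trend, then superimpose the seasonal variation) can also be sketched in a few lines. The approach below is only one reasonable way of finishing the job, not necessarily the article's: it extends the centred trend by its average quarterly rise and adds the average additive Quarter 3 variation.

# Illustrative forecast of sales for Quarter 3 of 2010 (time period 19).
trend = [1002.1, 1013.8, 1025.6, 1036.1, 1046.4, 1056.6, 1067.2, 1078.0,
         1088.9, 1100.0, 1111.0, 1120.9]    # centred moving averages, times 3-14
q3_variation = [-8.1, -10.4, -9.4]          # additive Quarter 3 variations

growth = (trend[-1] - trend[0]) / (len(trend) - 1)   # average rise per quarter
projected_trend = trend[-1] + growth * (19 - 14)     # project the trend to time 19
forecast = projected_trend + sum(q3_variation) / len(q3_variation)
print(round(projected_trend, 1), round(forecast, 1)) # about 1,174.9 and 1,165.6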

LEARNING CURVES

    The learning curve phenomenon was first quantified for business use in

    the early 1900s in the aircraft construction industry. It is perhaps no

    accident that this was where learning was investigated in a commercial

    context because the construction of aircraft was then both highly manual

    and highly complex, and it is exactly these complex, manual tasks which

    give the greatest opportunity for learning.

    The rule for quantifying learning is simple to quote, but very subtle in

    operation:

    The cumulative average time taken to complete a task decreases

    by (or to) a given proportion every time the cumulative output

    doubles.

    Learning curves are easy to tabulate by doubling cumulative

    output and applying the learning curve factor to the cumulative

    average time. For example, if the first item (or batch of items)

takes 20 hours to make and the learning effect is 80% (or 0.8),

    then the table will be:


Cumulative   Cumulative average   Cumulative
output (a)   unit time (b)        time (a x b)
1            20.00                20.00
2            16.00                32.00
4            12.80                51.20
8            10.24                81.92
16           8.19                 131.04
32           6.55                 209.60
64           5.24                 335.36

(Each doubling of cumulative output multiplies the cumulative average unit time by 0.8.)

    EXAMPLE 4

    A typical question would be: 'A company has already made the

    first four units, how long will it take to produce the next four?'

    Questions such as these can only be solved by working with

    cumulative outputs. The table tells us that the first eight items

    will take 81.92 hours in total; the first four of those would have

    taken 51.20 hours. Therefore, the time needed for the second

    four will be:

81.92 - 51.20 = 30.72 hours.
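The doubling table, and answers like the one above, are easy to generate programmatically. The following is a minimal, illustrative sketch (the dictionary of cumulative times is my own device); note that the article rounds the average times to two decimal places, so its later cumulative totals differ very slightly from the unrounded figures here:

# Tabulate an 80% learning curve by doubling cumulative output, then answer
# Example 4 by differencing cumulative times.
first_time = 20.0   # hours for the first unit (or batch)
rate = 0.8          # 80% learning effect

cumulative = {}
output, avg = 1, first_time
for _ in range(7):                          # outputs 1, 2, 4, ..., 64
    cumulative[output] = output * avg       # cumulative total time
    output, avg = output * 2, avg * rate    # double output, apply the 0.8 factor

print(round(cumulative[8] - cumulative[4], 2))   # 81.92 - 51.20 = 30.72 hours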

Knowledge of the learning curve effect, and of how times per unit change as a result, is important for the following:

    1 Costing. Variable labour and overheads are directly affected;

    fixed overheads are usually absorbed on a time basis.

    2 Pricing. You need to make sure that your price is

    competitive and, at the same time, that you make a profit.



    3 Scheduling work. If the learning effect is not taken into

account, machines might become idle because not enough

    work is planned.

    It is instructive to draw a graph of cumulative output against

    cumulative average unit time (Figure 5). This shows that:

    learning is particularly important in the early stages of

    production, shown by a rapid fall off in the cumulative

    average unit times

    learning is much less marked in later stages, showing very

    little improvement.

FIGURE 5: cumulative average unit time plotted against cumulative production

    In fact, it is usually assumed that eventually the learning effect

    will cease and that the time per unit will reach a steady state.

    Improvement stops because:

    there is a limit to human dexterity

    some processes cannot be speeded up any more (for

    example, a chemical reaction)



    new, inexperienced staff will replace practised ones from

    time to time, slowing things down again.

To find the steady state time for production, we usually have to free ourselves from the tabular approach, which is linked to cumulative output doubling. Let's say, in Example 4, that it is believed that a steady state is reached from the 80th item onwards. This doesn't fall on a cumulative doubling stage, so we can't solve it using the table, but there is a formula (provided below). Students are expected to have a scientific calculator in the exam, and be able to calculate b.

The formula is:

y = ax^b

where
y = cumulative average time per batch
a = time for the first batch
x = cumulative number of batches produced
b = log (learning rate)/log 2

We can test this formula by using it to calculate the table figure for a cumulative output of 32.

a = 20, x = 32
log 0.8 = -0.0969, log 2 = 0.3010
So, b = -0.0969/0.3010 = -0.3219

Therefore y, the cumulative average time per unit when production has reached 32, is:

y = 20 x 32^-0.3219 = 6.55 (as derived previously in the table).
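For those who want to check the arithmetic, here is a minimal Python sketch of the same test (illustrative only):

# b = log(learning rate)/log 2, then verify the 32-unit figure in the table.
from math import log

a = 20.0     # time for the first unit/batch
rate = 0.8   # 80% learning curve
b = log(rate) / log(2)

y_32 = a * 32 ** b
print(round(b, 4), round(y_32, 2))   # -0.3219 and 6.55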


But now back to the steady state problem. We need to find the time for the 80th item. This can only be calculated by finding the total time for 80 items and then subtracting the total time for the first 79 of those items:

Cumulative production = 80
y (the cumulative average time for the first 80) = 20 x 80^-0.3219 = 4.8801
Total time for the first 80 = 80 x 4.8801 = 390.41 hours

Cumulative production = 79
y (the cumulative average time for the first 79) = 20 x 79^-0.3219 = 4.899
Total time for the first 79 = 79 x 4.899 = 387.09 hours

Therefore, the time taken to make the 80th item must be:

390.41 - 387.09 = 3.32 hours

and so, if you were asked to forecast the total time that 100 items would take:

The first 80 would take (from above)    390.41 hours
The next 20 would take 20 x 3.32 =       66.40 hours
Total for 100 items                     456.81 hours
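A short, illustrative Python sketch of the steady state calculation (again, the names are mine) is:

# Incremental time for the 80th item (taken as the steady state time) and the
# forecast total time for 100 items.
from math import log

a, rate = 20.0, 0.8
b = log(rate) / log(2)    # about -0.3219

def total_time(units):
    # cumulative total time = units x cumulative average time per unit
    return units * a * units ** b

time_80th = total_time(80) - total_time(79)    # about 3.32 hours
total_100 = total_time(80) + 20 * time_80th    # first 80, then 20 at the steady rate
print(round(time_80th, 2), round(total_100, 1))
# about 3.32 and 456.7 (the article's 456.81 uses rounded intermediate figures)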

Ken Garrett is a freelance writer and lecturer.
