"not a GROUP BY expression" error in Oracle



I'm new to Oracle and I'm wondering if anyone can tell me what is wrong with my GROUP BY expression. I've tried removing several of the columns from the GROUP BY, but it still doesn't work. This query is part of a UNION with another query, but I've removed that part to cut down on reading.

 -- Selection Criteria: Date Range = 12/01/2011 to 12/31/2011; AO Transaction Fee = 4.95%

  SELECT 
    SUBSTR(Transaction_Date,1,10) Transaction_Date
, Item_Number
, REGEXP_REPLACE(SUBSTR(Item_Or_Adj_Description,1,50)
,'([^[:print:]])',' ') AS Item_Or_Adj_Desctription
, Customer_Type
, Document_ID
, Dealer_ID
, Sales_Type
, Item_Quantity
, Total_Fee
, State_Fee
, Transaction_Fee
, AO_Fee
, WDFW_Fee 

  FROM 
  ( 
    -- Sales Transactions 
   SELECT /*+ index(IT ITEM_X4) */ 
      TO_CHAR(IT.it_status_set_date - 2/24, 'MM/DD/YYYY') AS Transaction_Date,  -- Pacific Time
      TO_NUMBER(IT.ic_rcn) AS Item_Number, IT.it_descr AS Item_Or_Adj_Description, 
      DT.di_name AS Customer_Type, IT.it_docid AS Document_ID, IT.ag_id AS Dealer_ID, 
      CASE WHEN UPPER(IST.is_name) = 'ACTIVE' THEN 'SALE' ELSE IST.is_name END AS Sales_Type, 
      NVL(IT.it_quantity * CASE WHEN IT.is_id = 'AC' THEN 1 WHEN IT.is_id = 'DU' THEN 1 ELSE -1 END, 0) AS Item_Quantity,  -- Dups = 1
      NVL(IT.it_state_fee * CASE WHEN IT.is_id IN ('DC','SC') THEN -1.00 ELSE 1.00 END, 0.00) + 
    NVL(IT.it_other_fee * CASE WHEN IT.is_id IN ('DC','SC') THEN -1.00 ELSE 1.00 END, 0.00) AS Total_Fee, 
      NVL(IT.it_state_fee * CASE WHEN IT.is_id IN ('DC','SC') THEN -1.00 ELSE 1.00 END, 0.00) AS State_Fee, 
      NVL(IT.it_other_fee * CASE WHEN IT.is_id IN ('DC','SC') THEN -1.00 ELSE 1.00 END, 0.00) AS Transaction_Fee, 
      CASE WHEN IT.it_other_fee IS NULL OR IT.it_other_fee = 0.00 THEN 0.00 
        ELSE ROUND(IT.it_state_fee * CASE WHEN IT.is_id IN ('DC','SC') THEN -1.00 ELSE 1.00 END * (4.95 / 100), 2) 
      END AS AO_Fee, 
      CASE WHEN IT.it_other_fee IS NULL OR IT.it_other_fee = 0.00 THEN 0.00 
        ELSE (IT.it_other_fee * CASE WHEN IT.is_id IN ('DC','SC') THEN -1.00 ELSE 1.00 END) - 
          ROUND((IT.it_state_fee * CASE WHEN IT.is_id IN ('DC','SC') THEN -1.00 ELSE 1.00 END * (4.95 / 100)), 2) 
      END AS WDFW_Fee 
   FROM ITEM IT 
   JOIN DISCOUNT_TYPE DT ON DT.di_id = IT.di_id 
   JOIN ITEM_STATUS_TYPE IST ON IST.is_id = IT.is_id 
   WHERE IT.it_status_ind = 'A' -- Include active ITEM rows only. 
   AND (IT.is_id IN ('AC','DC','SC') OR (IT.is_id = 'DU' AND NVL(IT.it_state_fee, 0) != 0)) -- Exclude voids, exchanges, and false duplicates.
   AND IT.ic_rcn != '999' -- Exclude Dealer Fees. 
   AND IT.it_status_set_date BETWEEN TO_DATE('12/01/2011 00:00:00', 'MM/DD/YYYY HH24:MI:SS') + 2/24  -- Pacific Time
                 AND TO_DATE('12/31/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS') + 2/24 )


  GROUP BY TO_DATE(SUBSTR(Transaction_Date,1,10), 'MM/DD/YYYY')
           , Item_Number
           , Item_Or_Adj_Description
           , Customer_Type
           , Document_ID
           , Dealer_ID
           , Sales_Type
           , Item_Quantity;
        -- , Total_Fee
        -- , State_Fee
        -- , Transaction_Fee
        -- , AO_Fee
        -- , WDFW_Fee
2012-04-04 16:44
by RageQwit
Does it work if you remove the TO_DATE bit in the GROUP BY? - Matt Gibson 2012-04-04 16:46



It does not appear that you are doing any aggregation in your query (none of the columns in your SELECT list uses an aggregate function like COUNT or MAX). Given that, you should just remove the GROUP BY clause entirely.
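For illustration only, a minimal sketch of what that suggestion looks like against the query in the question (column list abbreviated, inline view kept exactly as posted):

  SELECT SUBSTR(Transaction_Date,1,10) Transaction_Date
       , Item_Number
       , Customer_Type
       , Total_Fee            -- ... remaining output columns unchanged
  FROM ( /* inline view exactly as in the question */ );
  -- no GROUP BY clause at all, since nothing is being aggregated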

If, as phlogratos suggests, you are trying to use a GROUP BY to eliminate duplicates, you're doing something wrong. Most commonly, the query itself shouldn't be generating duplicate rows in the first place, but a join condition or predicate was missed, causing rows to be duplicated. If that is the case, you're much better off fixing the underlying problem by adding the missing predicate or the missing join. If you really expect the inline view to return duplicate rows and you want to remove them, you're much better off doing a SELECT DISTINCT <<list of columns>> with no GROUP BY clause.
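And a sketch of the DISTINCT alternative, again abbreviated, for the case where duplicates from the inline view really are expected and need to be collapsed:

  SELECT DISTINCT
         SUBSTR(Transaction_Date,1,10) Transaction_Date
       , Item_Number
       , Customer_Type
       , Total_Fee            -- ... and so on for every output column
  FROM ( /* inline view exactly as in the question */ );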

2012-04-04 16:52
by Justin Cave
Without an aggregate function, the GROUP BY acts like a DISTINCT, so by removing the GROUP BY clause you may get the same row multiple times - phlogratos 2012-04-04 16:59
@phlogratos - Interesting point. If that is the intention, then it's almost certainly a bug. In the vast majority of cases, there is some sort of missing join condition or missing predicate that needs to be added to eliminate the duplicate rows. If duplicate rows are actually expected as a result of the joins, the statement should have a DISTINCT in the SELECT - Justin Cave 2012-04-04 17:06
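As a small illustration of that exchange, using a hypothetical two-column table t(col1, col2): the two statements below return the same de-duplicated rows, which is why an aggregate-free GROUP BY is usually better written as a DISTINCT.

  SELECT col1, col2 FROM t GROUP BY col1, col2;   -- no aggregates: the GROUP BY only de-duplicates
  SELECT DISTINCT col1, col2 FROM t;              -- same result set, clearer intent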



You have to remove the columns from the SELECT list, not from the GROUP BY clause; alternatively, you can add the missing columns to the GROUP BY clause. Every column in the SELECT list must either be mentioned in the GROUP BY clause or be wrapped in an aggregate function such as MIN, MAX, or COUNT.
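Applied to the posted query, that means either adding Total_Fee, State_Fee, Transaction_Fee, AO_Fee, and WDFW_Fee to the GROUP BY, or wrapping them in aggregates. A rough sketch of the aggregate option, assuming (this is only a guess at the intent, not something stated in the question) that the fees should be totalled per group:

  SELECT SUBSTR(Transaction_Date,1,10) Transaction_Date
       , Item_Number
       , Customer_Type
       -- ... every other non-aggregated output column ...
       , SUM(Total_Fee)       AS Total_Fee
       , SUM(State_Fee)       AS State_Fee
       , SUM(Transaction_Fee) AS Transaction_Fee
       , SUM(AO_Fee)          AS AO_Fee
       , SUM(WDFW_Fee)        AS WDFW_Fee
  FROM ( /* inline view as posted */ )
  GROUP BY SUBSTR(Transaction_Date,1,10)
         , Item_Number
         , Customer_Type
         -- ... every other non-aggregated output column must also be listed here
         ;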

2012-04-04 16:51
by phlogratos