Search Results

Search found 7 results on 1 page for 'user289429'.

  • WiFi Problems Ubuntu 13.10

    - by user289429
    I've been searching the forums for two days now, but none of the answers seem to work for me. I have an Acer Extensa 5630 with an Intel WiFi 5100 card, but even when I am right next to my router I cannot detect the wireless network. The weird thing is that when I create a hotspot from my mobile phone, it is detected automatically and I can connect to it. I think I do have the correct drivers installed. Thanks

  • SimpleJdbcCall ignoring JdbcTemplate fetch size

    - by user289429
    We are calling a PL/SQL stored procedure through Spring SimpleJdbcCall, and the fetch size set on the JdbcTemplate is being ignored by SimpleJdbcCall. The row mapper's result set fetch size is 10 even though we have set the JdbcTemplate fetch size to 200. Any idea why this happens and how to fix it? I have printed the fetch size of the result set in the row mapper in both snippets below: one time it is 200 and the other time it is 10, even though I use the same JdbcTemplate on both occasions.

    This direct execution through JdbcTemplate reports a fetch size of 200 in the row mapper:

        jdbcTemplate = new JdbcTemplate(ds);
        jdbcTemplate.setResultsMapCaseInsensitive(true);
        jdbcTemplate.setFetchSize(200);
        List temp = jdbcTemplate.query("select 1 from dual", new ParameterizedRowMapper() {
            public Object mapRow(ResultSet resultSet, int i) throws SQLException {
                System.out.println("Direct template : " + resultSet.getFetchSize());
                return new String(resultSet.getString(1));
            }
        });

    This execution through SimpleJdbcCall always reports a fetch size of 10 in the row mapper:

        jdbcCall = new SimpleJdbcCall(jdbcTemplate).withSchemaName(schemaName)
                .withCatalogName(catalogName).withProcedureName(functionName);
        jdbcCall.returningResultSet((String) outItValues.next(),
                new ParameterizedRowMapper<Map<String, Object>>() {
            public Map<String, Object> mapRow(ResultSet rs, int row) throws SQLException {
                System.out.println("Through simplejdbccall " + rs.getFetchSize());
                return extractRS(rs, row);
            }
        });
        outputList = (List<Map<String, Object>>) jdbcCall.executeObject(List.class, inParam);

  • Query performance difference: PL/SQL FORALL insert vs. plain SQL insert

    - by user289429
    We have been using a temporary table to store intermediate results in a PL/SQL stored procedure. Could anyone tell me if there is a performance difference between doing a bulk-collect insert through PL/SQL and a plain SQL insert? That is, between:

    1. a plain INSERT INTO statement, or
    2. opening a cursor, fetching it BULK COLLECT into a collection, and using FORALL to perform the insert.

    Which of the above two options is better for inserting a huge amount of temporary data?
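
    For illustration, here is a minimal sketch of the two approaches; the table and column names are hypothetical and not taken from the question:

        -- Option 1: plain SQL insert, done entirely inside the SQL engine
        -- (intermediate_results and source_data are assumed to have matching columns)
        INSERT INTO intermediate_results
        SELECT * FROM source_data;

        -- Option 2: BULK COLLECT into a collection, then a FORALL insert
        DECLARE
          TYPE t_rows IS TABLE OF source_data%ROWTYPE;
          l_rows t_rows;
          CURSOR c IS SELECT * FROM source_data;
        BEGIN
          OPEN c;
          LOOP
            FETCH c BULK COLLECT INTO l_rows LIMIT 1000;
            EXIT WHEN l_rows.COUNT = 0;
            FORALL i IN 1 .. l_rows.COUNT
              INSERT INTO intermediate_results VALUES l_rows(i);
          END LOOP;
          CLOSE c;
        END;
        /

    As a rule of thumb, the single INSERT ... SELECT avoids moving every row through a PL/SQL collection, so it is usually the faster of the two when no per-row logic is needed.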

  • PL/SQL profiler missing data

    - by user289429
    We are using the PL/SQL profiler to collect metrics. We noticed that in one of our environments the plsql_profiler_runs table is populated with the total execution time, but the finer detail that gets collected in the plsql_profiler_data table is missing. Any idea why this would be happening? We do call dbms_profiler.flush_data() before stopping the profiler, and we have seen this work fine in another environment.
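
    For reference, a minimal sketch of a profiling session of the kind described, which should leave per-line rows in plsql_profiler_data; the run comment and the reporting query are illustrative:

        BEGIN
          DBMS_PROFILER.start_profiler(run_comment => 'report run');
          -- ... invoke the procedures being measured here ...
          DBMS_PROFILER.flush_data;      -- writes per-line counters to plsql_profiler_data
          DBMS_PROFILER.stop_profiler;
        END;
        /

        -- Per-line detail for the most recent run
        SELECT u.unit_name, d.line#, d.total_occur, d.total_time
        FROM   plsql_profiler_data d
        JOIN   plsql_profiler_units u
               ON  u.runid = d.runid
               AND u.unit_number = d.unit_number
        WHERE  d.runid = (SELECT MAX(runid) FROM plsql_profiler_runs)
        ORDER  BY u.unit_name, d.line#;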

  • Range partition skip check

    - by user289429
    We have a large amount of data partitioned on a year value using range partitioning in Oracle; each partition contains data for only one year. When we write a query targeting a specific year, Oracle fetches the information from that partition but still checks whether the year is what we specified. Since the year column is not part of the index, Oracle fetches the year from the table and compares it, and we have seen that any time the query goes to the table data it gets too slow. Can we somehow avoid Oracle comparing the year values, since we know for sure that each partition contains information for only one year?
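
    One way to sidestep the comparison, shown as a sketch with a hypothetical table and partition name, is to name the partition in the FROM clause so the query carries no year predicate at all:

        -- Prunes to the 2009 partition, but still evaluates the predicate row by row
        SELECT *
        FROM   sales_data
        WHERE  year_col = 2009;

        -- Names the partition directly, so there is no per-row year comparison
        SELECT *
        FROM   sales_data PARTITION (p_2009);

    This only works, of course, when the application can map a year to its partition name up front.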

  • Pipelined function

    - by user289429
    Can someone provide an example of how to use a parallel table function in Oracle PL/SQL? We need to run a massive query for each of 15 years and combine the results.

        SELECT * FROM TABLE(TableFunction(CURSOR(SELECT * FROM year_table)))

    ...is effectively what we want. The innermost SELECT gives all the years, and the table function takes each year, runs the massive query and returns a collection. The problem we have is that all the years are being fed to a single invocation of the table function; we would prefer the table function to be called in parallel for each year. We tried all sorts of partitioning BY HASH and BY RANGE and it didn't help. Also, can we drop the keyword PIPELINED from the function declaration, since we are not performing any transformation and just need the aggregate of the result set?
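
    As a point of comparison, here is a minimal sketch of a parallel-enabled pipelined table function; the object and collection types, all names, and the PARALLEL degree are assumptions rather than anything from the original code:

        CREATE OR REPLACE TYPE year_result_t AS OBJECT (year_val NUMBER, total NUMBER);
        /
        CREATE OR REPLACE TYPE year_result_tab AS TABLE OF year_result_t;
        /

        CREATE OR REPLACE FUNCTION table_function(p_years IN SYS_REFCURSOR)
          RETURN year_result_tab
          PIPELINED
          PARALLEL_ENABLE (PARTITION p_years BY ANY)  -- let Oracle spread input rows across slaves
        IS
          l_year NUMBER;
        BEGIN
          LOOP
            FETCH p_years INTO l_year;
            EXIT WHEN p_years%NOTFOUND;
            -- the expensive per-year work would go here; each result row is piped back
            PIPE ROW (year_result_t(l_year, NULL));
          END LOOP;
          CLOSE p_years;
          RETURN;
        END;
        /

        -- The driving query itself must run in parallel for the function to be split up
        SELECT *
        FROM   TABLE(table_function(CURSOR(
                 SELECT /*+ PARALLEL(y 4) */ year_val FROM year_table y)));

    One common gotcha is that the cursor query has to execute in parallel (for example via a PARALLEL hint or a parallel degree on the table); without that, the function runs serially no matter how the PARTITION clause is written.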

  • Temporary intermediate table

    - by user289429
    In our project to generate massive reports in Oracle, we use some permanent tables to hold intermediate results. For example, to generate one report we run a few queries and populate the table; in the final step we join the intermediate table with huge application tables. These intermediate tables are cleared before the next report run. We have a few concerns in the performance area: these intermediate tables are transactional and don't have statistics. Is it a good idea to join them with application tables which are partitioned and have up-to-date statistics? We need the results stored in the intermediate tables to be available across requests from the UI, hence we are not in a position to use Oracle-provided temporary tables. Any thoughts on what could be done would be appreciated.
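
    Two routes that are often suggested for this situation, shown as a sketch with hypothetical table and column names:

        -- Option A: gather statistics on the intermediate table right after it is loaded
        BEGIN
          DBMS_STATS.gather_table_stats(
            ownname => USER,
            tabname => 'INTERMEDIATE_RESULTS');
        END;
        /

        -- Option B: let the optimizer sample the table at parse time instead of relying on stored statistics
        SELECT /*+ DYNAMIC_SAMPLING(ir 4) */ ir.report_id, app.amount
        FROM   intermediate_results ir
        JOIN   application_data app ON app.report_id = ir.report_id;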
