Search Results


  • JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue

    - by John-Brown.Evans
    This post is the second in a series of JMS articles which demonstrate how to use JMS queues in a SOA context. In the previous post, JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g, I showed you how to create a JMS queue and its dependent objects in WebLogic Server. In this article, we will use a sample program to write a message to that queue. Please review the previous post if you have not created those objects yet, as they will be required later in this example. The previous post also includes useful background information and links to the Oracle documentation for additional research. The following post in this series will show how to read the message from the queue again.

    1. Source code

    The following Java code will be used to write a message to the JMS queue. It is based on a sample program provided with the WebLogic Server installation. The sample is not installed by default, but needs to be installed manually using the WebLogic Server Custom Installation option, together with many other useful samples. You can either copy-paste the following code into your editor, or install all the samples.
    The knowledge base article in My Oracle Support: How To Install WebLogic Server and JMS Samples in WLS 10.3.x (Doc ID 1499719.1) describes how to install the samples.

    QueueSend.java

    package examples.jms.queue;

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.util.Hashtable;
    import javax.jms.*;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    /** This example shows how to establish a connection
     * and send messages to the JMS queue. The classes in this
     * package operate on the same JMS queue. Run the classes together to
     * witness messages being sent and received, and to browse the queue
     * for messages. The class is used to send messages to the queue.
     *
     * @author Copyright (c) 1999-2005 by BEA Systems, Inc. All Rights Reserved.
     */
    public class QueueSend {

        // Defines the JNDI context factory.
        public final static String JNDI_FACTORY = "weblogic.jndi.WLInitialContextFactory";

        // Defines the JMS connection factory.
        public final static String JMS_FACTORY = "jms/TestConnectionFactory";

        // Defines the queue.
        public final static String QUEUE = "jms/TestJMSQueue";

        private QueueConnectionFactory qconFactory;
        private QueueConnection qcon;
        private QueueSession qsession;
        private QueueSender qsender;
        private Queue queue;
        private TextMessage msg;

        /**
         * Creates all the necessary objects for sending
         * messages to a JMS queue.
         *
         * @param ctx JNDI initial context
         * @param queueName name of queue
         * @exception NamingException if operation cannot be performed
         * @exception JMSException if JMS fails to initialize due to internal error
         */
        public void init(Context ctx, String queueName) throws NamingException, JMSException {
            qconFactory = (QueueConnectionFactory) ctx.lookup(JMS_FACTORY);
            qcon = qconFactory.createQueueConnection();
            qsession = qcon.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            queue = (Queue) ctx.lookup(queueName);
            qsender = qsession.createSender(queue);
            msg = qsession.createTextMessage();
            qcon.start();
        }

        /**
         * Sends a message to a JMS queue.
         *
         * @param message message to be sent
         * @exception JMSException if JMS fails to send message due to internal error
         */
        public void send(String message) throws JMSException {
            msg.setText(message);
            qsender.send(msg);
        }

        /**
         * Closes JMS objects.
         * @exception JMSException if JMS fails to close objects due to internal error
         */
        public void close() throws JMSException {
            qsender.close();
            qsession.close();
            qcon.close();
        }

        /** main() method.
         *
         * @param args WebLogic Server URL
         * @exception Exception if operation fails
         */
        public static void main(String[] args) throws Exception {
            if (args.length != 1) {
                System.out.println("Usage: java examples.jms.queue.QueueSend WebLogicURL");
                return;
            }
            InitialContext ic = getInitialContext(args[0]);
            QueueSend qs = new QueueSend();
            qs.init(ic, QUEUE);
            readAndSend(qs);
            qs.close();
        }

        private static void readAndSend(QueueSend qs) throws IOException, JMSException {
            BufferedReader msgStream = new BufferedReader(new InputStreamReader(System.in));
            String line = null;
            boolean quitNow = false;
            do {
                System.out.print("Enter message (\"quit\" to quit): \n");
                line = msgStream.readLine();
                if (line != null && line.trim().length() != 0) {
                    qs.send(line);
                    System.out.println("JMS Message Sent: " + line + "\n");
                    quitNow = line.equalsIgnoreCase("quit");
                }
            } while (!quitNow);
        }

        private static InitialContext getInitialContext(String url) throws NamingException {
            Hashtable env = new Hashtable();
            env.put(Context.INITIAL_CONTEXT_FACTORY, JNDI_FACTORY);
            env.put(Context.PROVIDER_URL, url);
            return new InitialContext(env);
        }
    }

    2. How to Use This Class

    2.1 From the file system on UNIX/Linux

    Log in to a machine with a WebLogic installation and create a directory to contain the source code, matching the package name, e.g. $HOME/examples/jms/queue. Copy the above QueueSend.java file to this directory.

    Set the CLASSPATH and environment to match the WebLogic server environment: go to $MIDDLEWARE_HOME/user_projects/domains/base_domain/bin and execute

    . ./setDomainEnv.sh

    Collect the following information required to run the program:

    - The JNDI name of the JMS queue to use: in the WebLogic server console > Services > Messaging > JMS Modules > (module name, e.g. TestJMSModule) > (JMS queue name, e.g. TestJMSQueue), select the queue and note its JNDI name, e.g. jms/TestJMSQueue.
    - The JNDI name of a connection factory to connect to the queue: follow the same path as above to get the connection factory for the above queue, e.g. TestConnectionFactory, and its JNDI name, e.g. jms/TestConnectionFactory.
    - The URL and port of the WebLogic server running the above queue: check the JMS server for the above queue and the managed server it is targeted to, for example soa_server1. Now find the port this managed server is listening on by looking at its entry under Environment > Servers in the WLS console, e.g. 8001. The URL for the server to be given to the QueueSend program in this example will therefore be t3://host.domain:8001, e.g. t3://jbevans-lx.de.oracle.com:8001.

    Edit QueueSend.java and enter the above connection factory and queue names respectively under

    public final static String JMS_FACTORY = "jms/TestConnectionFactory";
    public final static String QUEUE = "jms/TestJMSQueue";

    Compile QueueSend.java using

    javac QueueSend.java

    Go to the source's top-level directory and execute it using

    java examples.jms.queue.QueueSend t3://jbevans-lx.de.oracle.com:8001

    This will prompt for a text input, or "quit" to end. In the WLS console, go to the queue and select Monitoring to confirm that a new message was written to the queue.

    2.2 From JDeveloper

    Create a new application in JDeveloper, called, for example, JMSTests. When prompted for a project name, enter QueueSend and select Java as the technology. Default Package = examples.jms.queue (but you can enter anything here, as you will overwrite it in the code later). Leave the other values at their defaults and press Finish.

    Create a new Java class called QueueSend and use the default values. This will create a file called QueueSend.java. Open QueueSend.java, if it is not already open, and replace all its contents with the QueueSend Java code listed above. Some lines might have warnings due to unresolved references. These are due to missing libraries in the JDeveloper project. Add the following libraries to the JDeveloper project: right-click the QueueSend project in the navigation menu and select Libraries and Classpath, then Add JAR/Directory. Go to the folder containing the JDeveloper installation and find/choose the file javax.jms_1.1.1.jar, e.g. at D:\oracle\jdev11116\modules\javax.jms_1.1.1.jar. Do the same for the weblogic.jar file located, for example, in D:\oracle\jdev11116\wlserver_10.3\server\lib\weblogic.jar.

    Now you should be able to compile the project, for example by selecting the Make or Rebuild icons. If you try to execute the project, you will get a usage message, as it requires a parameter pointing to the WLS installation containing the JMS queue, for example t3://jbevans-lx.de.oracle.com:8001. You can automatically pass this parameter to the program from JDeveloper by editing the project's Run/Debug/Profile: select the project properties, select Run/Debug/Profile, edit the Default run configuration and add the connection parameter to the Program Arguments field. If you execute it again, you will see that it has passed the parameter to the start command. If you get a ClassNotFoundException for the class weblogic.jndi.WLInitialContextFactory, check that the weblogic.jar file was correctly added to the project in one of the earlier steps above.

    Set the values of JMS_FACTORY and QUEUE the same way as described above for the Linux file system, i.e.

    public final static String JMS_FACTORY = "jms/TestConnectionFactory";
    public final static String QUEUE = "jms/TestJMSQueue";

    You need to make one more change to the project. If you execute it now, it will prompt for the payload of the JMS message, but you won't be able to enter it by default in JDeveloper. You need to enable program input for the project first: select the project's properties, then Tool Settings, then check the Allow Program Input checkbox at the bottom and save. Now when you execute the project, you will get a text entry field at the bottom into which you can enter the payload. You can enter multiple messages until you enter "quit", which will cause the program to stop. The TestJMSQueue's Monitoring page in the WLS console will then show the message that was sent to the queue.

    This concludes the sample. In the following post I will show you how to read the message from the queue again.
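
    As a closing aside, the sample can also be driven by a small one-shot program for a quick smoke test, instead of typing into stdin. The driver below is not part of the original sample; it is a minimal sketch that assumes the QueueSend class above is compiled on the classpath, and the default URL is a placeholder to replace with your own t3://host:port.

    package examples.jms.queue;

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    // One-shot driver for the QueueSend sample: sends a single fixed
    // message and exits, instead of reading messages from stdin.
    public class QueueSendOnce {
        public static void main(String[] args) throws Exception {
            // Placeholder default; pass your own WebLogic URL as the argument.
            String url = (args.length == 1) ? args[0] : "t3://localhost:8001";

            // Build the JNDI context the same way QueueSend does internally.
            Hashtable env = new Hashtable();
            env.put(Context.INITIAL_CONTEXT_FACTORY, QueueSend.JNDI_FACTORY);
            env.put(Context.PROVIDER_URL, url);
            InitialContext ic = new InitialContext(env);

            QueueSend qs = new QueueSend();
            qs.init(ic, QueueSend.QUEUE);        // looks up connection factory and queue
            qs.send("Hello from QueueSendOnce"); // the message payload
            qs.close();                          // releases sender, session and connection
            System.out.println("Message sent to " + QueueSend.QUEUE);
        }
    }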

    Read the article

  • BIP BIServer Query Debug

    - by Tim Dexter
    With some help from Bryan, I have uncovered a way of being able to debug, or at least log, what BIServer is doing when BIP sends it a query request. This is not for those of you querying the database directly, but for those of you using the BIServer and its data model to fetch data for a BIP report. If you have written or used the query builder against BIServer, and when you run the report it chokes with a cryptic message that you have no clue about, read on.

    When BIP runs a piece of BIServer logical SQL to fetch data, it does not appear to validate it; it just passes it through. So what is BIServer doing on its end? As you may know, you are not writing regular physical SQL; it's actually logical SQL, e.g.

    select Jobs."Job Title" as "Job Title",
        Employees."Last Name" as "Last Name",
        Employees.Salary as Salary,
        Locations."Department Name" as "Department Name",
        Locations."Country Name" as "Country Name",
        Locations."Region Name" as "Region Name"
    from HR.Locations Locations, HR.Employees Employees, HR.Jobs Jobs

    The tables might not even be physical tables; we don't care, that's what the BIServer and its model are for. You have put all the effort into building the model, so just go get me the data from wherever it might be. The BIServer takes the logical SQL and uses its vast brain to work out what the physical SQL is, executes it and passes the result back to BIP.

    select distinct T32556.JOB_TITLE as c1,
        T32543.LAST_NAME as c2,
        T32543.SALARY as c3,
        T32537.DEPARTMENT_NAME as c4,
        T32532.COUNTRY_NAME as c5,
        T32577.REGION_NAME as c6
    from JOBS T32556, REGIONS T32577, COUNTRIES T32532, LOCATIONS T32569, DEPARTMENTS T32537, EMPLOYEES T32543
    where ( T32532.COUNTRY_ID = T32569.COUNTRY_ID
        and T32532.REGION_ID = T32577.REGION_ID
        and T32537.DEPARTMENT_ID = T32543.DEPARTMENT_ID
        and T32537.LOCATION_ID = T32569.LOCATION_ID
        and T32543.JOB_ID = T32556.JOB_ID )

    Not a very tough example, I know, but you get the idea. So how do I know what the BIServer is up to? How can I find out what the issue might be if BIServer chokes on my query? There are a couple of steps:

    1. In the Administrator tool you need to set the logging level for the Administrator user to something greater than the default '0'. '7' is going to give you the max. Just remember to take it back down after you have finished debugging. I needed to bounce my BIServer service.
    2. Now here's the secret sauce: prefix the following to your BIP query, setting the log level to the one you have in the admin tool:

    set variable LOGLEVEL = 7;

    Now run your BIP report. With the prefix in place, BIServer will write to the NQQuery.log file. This is located in the ./OracleBI/server/Log directory. In there you are going to find the complete process the BIServer has gone through to try and get the data back for you.

    A quick note: if the BIServer can, it's going to hit that great BIEE cache to get your data, and you may not see the full log. If this is the case, get into the Administration page (via the browser login) and clear out your BIP report cursor, then re-run.

    This will hopefully help out if you are trying to debug that annoying BIP report that will not run or is getting some strange data. Don't forget to turn that logging level back down once you are done. This will avoid the DBA screaming at you for sucking up all the disk space on the system.

    Read the article

  • Intermittent 404 on select assets, LAMP stack

    - by Tom Lagier
    We have a LAMP stack WordPress server that is serving most assets correctly. However, one plugin's CSS file and several images are returning soft 404s roughly 20% of the time. I can't find any reference to the 404 in the access logs, but the browser is definitely receiving a 404 response from somewhere (WordPress, I would assume).

    When I use an alias URL that does not match the site URL but does resolve to the asset path, the resource loads correctly 100% of the time. However, using the site URL only resolves the select, problematic assets 20% of the time. You can test one of the problematic assets here:

    http://www.mreco.org/wp-content/uploads/2014/05/zero-cost.jpg

    However, the alias link always resolves correctly:

    http://mr-eco.wordpress.promocampaigns.com/wp-content/uploads/2014/05/zero-cost.jpg

    Stranger, if I attempt to access outdated content that definitely does not exist on the server at the live URL, it returns the content roughly 50% of the time. Using the alias link, it 404s 100% of the time - the correct behavior. The error log and PHP error log are clean.

    A sample access log (pulled from grep 'zero-cost.jpg' /var/log/httpd/mr-eco-access_log) from several refreshes of the live direct link (where I am not seeing any 404s):

    10.166.202.202 - - [28/May/2014:20:27:41 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 -
    10.166.202.202 - - [28/May/2014:20:27:42 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 -
    10.166.202.202 - - [28/May/2014:20:27:43 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 -
    10.166.202.202 - - [28/May/2014:20:27:43 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 -
    10.176.201.37 - - [28/May/2014:20:27:56 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 200 57027

    Chrome's dev tools list the following network activity before displaying the 404 page content:

    zero-cost.jpg /wp-content/uploads/2014/05 GET 404 Not Found text/html Other 15.9 KB 73.2 KB 953 ms 947 ms

    My Apache configuration is standard; I've listed the virtual host entry and .htaccess file below. I can provide other parts of the Apache config if necessary.

    Virtual host:

    <VirtualHost *:80>
        DocumentRoot /var/www/public_html/mr-eco.wordpress.promocampaigns.com
        ServerName www.mreco.org
        ServerAlias mreco.org mr-eco.wordpress.promocampaigns.com
        ErrorLog logs/mr-eco-error_log
        CustomLog logs/mr-eco-access_log common
        <Directory /var/www/public_html/mr-eco.wordpress.promocampaigns.com>
            AllowOverride All
            SetOutputFilter DEFLATE
        </Directory>
    </VirtualHost>

    .htaccess:

    # BEGIN WordPress
    <IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteBase /
    RewriteRule ^index\.php$ - [L]
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule . /index.php [L]
    </IfModule>
    # END WordPress

    I have checked for multiple A records and can confirm that there is a single A record pointing at the domain:

    ;; ANSWER SECTION:
    mreco.org. 60 IN A 50.18.58.174

    I'm fairly new to systems administration, and at a complete loss as to what could cause this. In the past, inconsistently 404ing assets have been caused by out-of-sync instances behind a load balancer. In this case, it is a single instance behind the load balancer. Because of the inconsistency, it feels like a caching issue. We don't make use of Apache caching, and as far as I know WordPress should not be caching either.

    What I've done so far:
    - Reset WordPress permalinks
    - Disabled WordPress plugins
    - Re-generated the WordPress .htaccess file
    - Swapped ServerName and ServerAlias directives
    - Cleared browser cache
    - Confirmed disk location of resources
    - Checked PHP, access, and error logs
    - Confirmed correct DNS setup (can post if necessary)

    I'm at a total loss. Thanks for helping me out!

    Read the article

  • Script to UPDATE STATISTICS with time window

    - by Bill Graziano
    I recently spent some time troubleshooting odd query plans and came to the conclusion that we needed better statistics. We've been running sp_updatestats, but apparently it wasn't sampling enough of the table to get us what we needed. I have a pretty limited window at night where I can hammer the disks while this runs. The script below just calls UPDATE STATISTICS on all tables that "need" updating. It defines need as any table whose statistics are older than the number of days you specify (30 by default). It also has a throttle, so it breaks out of the loop after a set amount of time (60 minutes). That means it won't start processing a new table after this time, but it might take longer than this to finish what it's doing. It always processes the oldest statistics first, so it will eventually get to all of them. It defaults to sampling 25% of the table. I'm not sure that's a good default, but it works for now. I've tested this in SQL Server 2005 and SQL Server 2008. I liked the way Michelle parameterized her re-index script, and I took the same approach.

    CREATE PROCEDURE dbo.UpdateStatistics (
        @timeLimit smallint = 60
        ,@debug bit = 0
        ,@executeSQL bit = 1
        ,@samplePercent tinyint = 25
        ,@printSQL bit = 1
        ,@minDays tinyint = 30
    )
    AS
    /*******************************************************************
    Copyright Bill Graziano 2010
    *******************************************************************/
    SET NOCOUNT ON;
    PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + 'Launching...'

    IF OBJECT_ID('tempdb..#status') IS NOT NULL
        DROP TABLE #status;

    CREATE TABLE #status(
          databaseID INT
        , databaseName NVARCHAR(128)
        , objectID INT
        , page_count INT
        , schemaName NVARCHAR(128) NULL
        , objectName NVARCHAR(128) NULL
        , lastUpdateDate DATETIME
        , scanDate DATETIME
        CONSTRAINT PK_status_tmp PRIMARY KEY CLUSTERED(databaseID, objectID)
    );

    DECLARE @SQL NVARCHAR(MAX);
    DECLARE @dbName nvarchar(128);
    DECLARE @databaseID INT;
    DECLARE @objectID INT;
    DECLARE @schemaName NVARCHAR(128);
    DECLARE @objectName NVARCHAR(128);
    DECLARE @lastUpdateDate DATETIME;
    DECLARE @startTime DATETIME;

    SELECT @startTime = GETDATE();

    DECLARE cDB CURSOR READ_ONLY FOR
        select [name] from master.sys.databases where database_id > 4

    OPEN cDB
    FETCH NEXT FROM cDB INTO @dbName
    WHILE (@@fetch_status <> -1)
    BEGIN
        IF (@@fetch_status <> -2)
        BEGIN
            SELECT @SQL = '
                use ' + QUOTENAME(@dbName) + '
                select DB_ID() as databaseID
                    , DB_NAME() as databaseName
                    , t.object_id
                    , sum(used_page_count) as page_count
                    , s.[name] as schemaName
                    , t.[name] AS objectName
                    , COALESCE(d.stats_date, ''1900-01-01'')
                    , GETDATE() as scanDate
                from sys.dm_db_partition_stats ps
                join sys.tables t on t.object_id = ps.object_id
                join sys.schemas s on s.schema_id = t.schema_id
                join (
                    SELECT object_id, MIN(stats_date) as stats_date
                    FROM (
                        select object_id, stats_date(object_id, stats_id) as stats_date
                        from sys.stats) as d
                    GROUP BY object_id
                ) as d ON d.object_id = t.object_id
                where ps.row_count > 0
                group by s.[name], t.[name], t.object_id, COALESCE(d.stats_date, ''1900-01-01'') '

            SET ANSI_WARNINGS OFF;
            Insert #status
            EXEC ( @SQL);
            SET ANSI_WARNINGS ON;
        END
        FETCH NEXT FROM cDB INTO @dbName
    END
    CLOSE cDB
    DEALLOCATE cDB

    DECLARE cStats CURSOR READ_ONLY FOR
        SELECT databaseID
            , databaseName
            , objectID
            , schemaName
            , objectName
            , lastUpdateDate
        FROM #status
        WHERE DATEDIFF(dd, lastUpdateDate, GETDATE()) >= @minDays
        ORDER BY lastUpdateDate ASC, page_count desc, [objectName] ASC

    OPEN cStats
    FETCH NEXT FROM cStats INTO @databaseID, @dbName, @objectID, @schemaName, @objectName, @lastUpdateDate
    WHILE (@@fetch_status <> -1)
    BEGIN
        IF (@@fetch_status <> -2)
        BEGIN
            IF DATEDIFF(mi, @startTime, GETDATE()) > @timeLimit
            BEGIN
                PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + '*** Time Limit Reached ***';
                GOTO __DONE;
            END

            SELECT @SQL = 'UPDATE STATISTICS ' + QUOTENAME(@dbName) + '.' + QUOTENAME(@schemaName) + '.' + QUOTENAME(@objectName) + ' WITH SAMPLE ' + CAST(@samplePercent AS NVARCHAR(100)) + ' PERCENT;';

            IF @printSQL = 1
                PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + @SQL + ' (Last Updated: ' + CAST(@lastUpdateDate AS VARCHAR(100)) + ')'

            IF @executeSQL = 1
            BEGIN
                EXEC (@SQL);
            END
        END
        FETCH NEXT FROM cStats INTO @databaseID, @dbName, @objectID, @schemaName, @objectName, @lastUpdateDate
    END

    __DONE:

    CLOSE cStats
    DEALLOCATE cStats

    PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + 'Completed.'
    GO

    Read the article

  • Indexing data from multiple tables with Oracle Text

    - by Roger Ford
    It's well known that Oracle Text indexes perform best when all the data to be indexed is combined into a single index. The query

    select * from mytable where contains (title, 'dog') > 0 or contains (body, 'cat') > 0

    will tend to perform much worse than

    select * from mytable where contains (text, 'dog WITHIN title OR cat WITHIN body') > 0

    For this reason, Oracle Text provides the MULTI_COLUMN_DATASTORE, which will combine data from multiple columns into a single index. Effectively, it constructs a "virtual document" at indexing time, which might look something like:

    <title>the big dog</title>
    <body>the ginger cat smiles</body>

    This virtual document can be indexed using either AUTO_SECTION_GROUP, or by explicitly defining sections for title and body, allowing the query as expressed above. Note that we've used a column called "text" - this might have been a dummy column added to the table simply to allow us to create an index on it - or we could have created the index on either of the "real" columns, title or body. It should be noted that MULTI_COLUMN_DATASTORE doesn't automatically handle updates to the columns used by it: if you create the index on the column text, but specify that columns title and body are to be indexed, you will need to arrange triggers such that the text column is updated whenever title or body are altered.

    That works fine for single tables. But what if we actually want to combine data from multiple tables? In that case there are two approaches which work well:

    1. Create a real table which contains a summary of the information, and create the index on that using the MULTI_COLUMN_DATASTORE. This is simple and effective, but it does use a lot of disk space, as the information to be indexed has to be duplicated.
    2. Create our own "virtual" documents using the USER_DATASTORE. The user datastore allows us to specify a PL/SQL procedure which will be used to fetch the data to be indexed, returned in a CLOB, or occasionally in a BLOB or VARCHAR2. This PL/SQL procedure is called once for each row in the table to be indexed, and is passed the ROWID value of the current row being indexed. The actual contents of the procedure are entirely up to the owner, but it is normal to fetch data from one or more columns of database tables.

    In both cases, we still need to take care of updates, making sure that we have all the triggers necessary to update the indexed column (and, in case 1, the summary table) whenever any of the data to be indexed gets changed.

    I've written full examples of both these techniques, as SQL scripts to be run in the SQL*Plus tool. You will need to run them as a user who has the CTXAPP role and the CREATE DIRECTORY privilege. Part of the data to be indexed is a Microsoft Word file called "1.doc". You should create this file in Word, preferably containing the single line of text: "test document". This file can be saved anywhere, but the SQL scripts need to be changed so that the "create or replace directory" command refers to the right location. In the example, I've used C:\doc.

    multi_table_indexing_1.sql: creates a summary table containing all the data, and uses multi_column_datastore
    multi_table_indexing_2.sql: creates "virtual" documents using a procedure as a user_datastore

    Read the article

  • ConcurrentDictionary<TKey,TValue> used with Lazy<T>

    - by Reed
    In a recent thread on the MSDN forum for the TPL, Stephen Toub suggested mixing ConcurrentDictionary<T,U> with Lazy<T>. This provides a fantastic model for creating a thread-safe dictionary of values where the construction of the value type is expensive. This is an incredibly useful pattern for many operations, such as value caches.

    The ConcurrentDictionary<TKey, TValue> class was added in .NET 4, and provides a thread-safe, lock-free collection of key value pairs. While this is a fantastic replacement for Dictionary<TKey, TValue>, it has a potential flaw when used with values where construction of the value class is expensive. The typical way this is used is to call a method such as GetOrAdd to fetch or add a value to the dictionary. It handles all of the thread safety for you, but as a result, if two threads call this simultaneously, two instances of TValue can easily be constructed. If TValue is very expensive to construct, or worse, has side effects if constructed too often, this is less than desirable. While you can easily work around this with locking, Stephen Toub provided a very clever alternative: using Lazy<TValue> as the value in the dictionary instead.

    This looks like the following. Instead of calling:

    MyValue value = dictionary.GetOrAdd(
        key,
        k => new MyValue(k));

    We would instead use a ConcurrentDictionary<TKey, Lazy<TValue>>, and write:

    MyValue value = dictionary.GetOrAdd(
        key,
        k => new Lazy<MyValue>(
            () => new MyValue(k)))
        .Value;

    This simple change dramatically changes how the operation works. Now, if two threads call this simultaneously, instead of constructing two MyValue instances, we construct two Lazy<MyValue> instances. However, the Lazy<T> class is very cheap to construct. Unlike "MyValue", we can safely afford to construct this twice and "throw away" one of the instances. We then call Lazy<T>.Value at the end to fetch our "MyValue" instance. At this point, GetOrAdd will always return the same instance of Lazy<MyValue>. Since Lazy<T> doesn't construct the MyValue instance until requested, the actual MyValue instance returned is only constructed once.
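
    For comparison, a rough Java analogue of the same idea (a sketch, not the post's code) can be built on ConcurrentHashMap, whose computeIfAbsent runs the factory at most once per key, at the cost of briefly blocking other readers of that key while it runs. MyValue here is a placeholder for any expensive-to-construct type.

    import java.util.concurrent.ConcurrentHashMap;

    // Sketch of a thread-safe memoizing cache, analogous to the
    // ConcurrentDictionary<TKey, Lazy<TValue>> pattern in the post.
    public class ValueCache {
        private final ConcurrentHashMap<String, MyValue> cache = new ConcurrentHashMap<>();

        public MyValue get(String key) {
            // computeIfAbsent guarantees the factory runs at most once per key,
            // so two racing threads cannot both construct a MyValue.
            return cache.computeIfAbsent(key, k -> new MyValue(k));
        }

        // Placeholder for the expensive-to-construct value type.
        static class MyValue {
            MyValue(String key) { /* expensive work would happen here */ }
        }
    }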

    Read the article

  • How can I best implement 'cache until further notice' with memcache in multiple tiers?

    - by ajreal
    the term "client" used here is not referring to client's browser, but client server Before cache workflow 1. client make a HTTP request --> 2. server process --> 3. store parsed results into memcache for next use (cache indefinitely) --> 4. return results to client --> 5. client get the result, store into client's local memcache with TTL After cache workflow 1. another client make a HTTP request --> 2. memcache found return memcache results to client --> 3. client get the result, store into client's local memcache with TTL TTL = time to live Is possible for me to know when the data was updated, and to expire relevant memcache(s) accordingly. However, the pitfalls on client site cache TTL Any data update before the TTL is not pick-up by client memcache. In reverse manner, where there is no update, client memcache still expire after the TTL First request (or concurrent requests) after cache TTL will get throttle as it need to repeat the "Before cache workflow" In the event where client require several HTTP requests on a single web page, it could be very bad in performance. Ideal solution should be client to cache indefinitely until further notice. Here are the three proposals about futher notice Proposal 1 : Make use on HTTP header (current implementation) 1. client sent HTTP request last modified time header 2. server check if last data modified time=last cache time return status 304 3. client based on header to decide further processing GOOD? ---- - save some parsing for client - lesser data transfer BAD? ---- - fire a HTTP request is still slow - server end still need to process lots of requests Proposal 2 : Consistently issue a HTTP request to check all data group last modified time 1. client fire a HTTP request 2. server to return last modified time for all data group 3. client compare local last cache time with the result 4. if data group last cache time < server last modified time then request again for that data group only GOOD? ---- - only fetch what is no up-to-date - less requests for server BAD? ---- - every web page require a HTTP request Proposal 3 : Tell client when new data is available (Push) 1. when server end notice there is a change on a data group 2. notify clients on the changes 3. help clients to fetch again data 4. then reset client local memcache after data is parsed GOOD? ---- - let the cache act/behave like a true cache BAD? ---- - encourage race condition My preference is on proposal 3, and something like Gearman could be ideal Where there is a change, Gearman server to sent the task to multiple clients (workers). Am I crazy? (I know my first question is a bit crazy)

    Read the article

  • git pull gives error: 401 Authorization Required while accessing https://git.foo.com/bar.git

    - by spuder
    My MacBook Pro is able to clone/push/pull from the company git server. My CentOS 6.3 VM gets a 401 error:

    git clone https://git.acme.com/git/torque-setup
    error: The requested URL returned error: 401 Authorization Required while accessing https://git.acme.com/git/torque-setup/info/refs

    As a workaround, I've tried creating a folder with an empty repository, then setting the remote to the company server. I get the same error when trying a git pull. The remotes are identical between the machines.

    MacBook Pro (working):
    git --version
    git version 1.7.10.2 (Apple Git-33)
    git remote -v
    origin https://git.acme.com/git/torque-setup (fetch)
    origin https://git.acme.com/git/torque-setup (push)

    CentOS 6.3 (not working):
    yum install -y git
    git --version
    git version 1.7.1
    git remote -v
    origin https://git.acme.com/git/torque-setup (fetch)
    origin https://git.acme.com/git/torque-setup (push)

    The git server only allows https, not git or ssh connections. Why is the MacBook Pro able to do a git pull, while the CentOS machine can't?

    Solution (update 2013-5-15): As jku mentioned, the culprit is the old version of git installed on the CentOS box. Unfortunately, 1.7.1 is what you get when you run yum install git. The workaround is to manually install a newer version of git, or simply add the username to the repo URL:

    git clone https://[email protected]/git/torque-setup

    Read the article

  • I/O error(socket error): [Errno 111] Connection refused

    - by Schitti
    I have a program that uses urllib to periodically fetch a URL, and I see intermittent errors like:

    I/O error(socket error): [Errno 111] Connection refused.

    It works 90% of the time, but the other 10% it fails. If I retry the fetch immediately after it fails, it succeeds. I'm unable to figure out why this is so. I tried to see if any ports are available, and they are. Any debugging ideas? For additional info, the stack trace is:

    File "/usr/lib/python2.6/urllib.py", line 235, in retrieve
      fp = self.open(url, data)
    File "/usr/lib/python2.6/urllib.py", line 203, in open
      return getattr(self, name)(url)
    File "/usr/lib/python2.6/urllib.py", line 342, in open_http
      h.endheaders()
    File "/usr/lib/python2.6/httplib.py", line 868, in endheaders
      self._send_output()
    File "/usr/lib/python2.6/httplib.py", line 740, in _send_output
      self.send(msg)
    File "/usr/lib/python2.6/httplib.py", line 699, in send
      self.connect()
    File "/usr/lib/python2.6/httplib.py", line 683, in connect
      self.timeout)
    File "/usr/lib/python2.6/socket.py", line 512, in create_connection
      raise error, msg

    Edit - A google search isn't very helpful. What I got out of it is that the server I'm fetching from sometimes refuses connections. How can I verify it's not a bug in my code, and that this is indeed the case?

    Read the article

  • Core Audio - CARingBuffer

    - by tech74
    Hi, I'm looking at using the CARingBuffer in iPhone SDK 3.1 Developer\Extras\CoreAudio\PublicUtility, but was a little puzzled about some of its methods. (Firstly, this will only make sense to anyone who has used this class.) For example, the GetTimeBounds, SetTimeBounds and ClipTimeBounds functions: what are these actually doing?

    Also, when using it, I get crashes caused by, for example, this line in the main Fetch method:

    ZeroABL(abl, 0, destStartOffset * mBytesPerFrame);

    CARingBufferError CARingBuffer::Fetch(AudioBufferList *abl, UInt32 nFrames, SampleTime startRead)
    {
        SampleTime endRead = startRead + nFrames;
        SampleTime startRead0 = startRead;
        SampleTime endRead0 = endRead;
        SampleTime size;

        CARingBufferError err = ClipTimeBounds(startRead, endRead);
        if (err) return err;
        size = endRead - startRead;

        SInt32 destStartOffset = startRead - startRead0;
        if (destStartOffset > 0) {
            ZeroABL(abl, 0, destStartOffset * mBytesPerFrame);
        }
        // (rest of the method omitted)
    }

    Here the destStartOffset has become larger than the size of the abl buffer list, so when a memset is done it exceeds the boundaries of the abl buffer list, causing the crash. Why hasn't this class got checks in place to prevent this?

    Read the article

  • nhibernate will not cascade delete children

    - by marn
    The scenario is as follows: I have 3 objects (I simplified the names) named Parent, Parent's Child and Child's Child. Parent's Child is a set in Parent, and Child's Child is a set in Child. The mapping is as follows (relevant parts):

    Parent:
    <set name="parentset" table="pc-table" lazy="false" fetch="subselect" cascade="all-delete-orphan" inverse="true">
      <key column="FK_ID_PC" on-delete="cascade"/>
      <one-to-many class="parentchild,parentchild-ns"/>
    </set>

    Parent's child:
    <set name="childset" table="cc-table" lazy="false" fetch="subselect" cascade="all-delete-orphan" inverse="true">
      <key column="FK_ID_CC" on-delete="cascade"/>
      <one-to-many class="childschild,childschild-ns"/>
    </set>

    What I want to achieve is that when I delete the parent, there is a cascade delete all the way through to the child's child. But what currently happens is this (this is purely for mapping test purposes).

    Getting a parent entity (works fine):
    IQuery query = session.CreateQuery("from Parent where ID =" + ID);
    IParent doc = query.UniqueResult<Parent>();

    Now the delete part:
    session.Delete(doc);
    transaction.Commit();

    After having solved the 'cannot insert null value' error with cascading and inverse, I hoped this code would now delete everything, but only the parent is being deleted. Did I miss something in my mapping which is likely to be missed? Any hint in the right direction is more than welcome!

    Read the article

  • Encoding issue: Cocoa Error 261?

    - by Attacus
    So I'm fetching a JSON string from a PHP script in my iPhone app using:

    NSURL *baseURL = [NSURL URLWithString:@"test.php"];
    NSError *encodeError = [[NSError alloc] init];
    NSString *jsonString = [NSString stringWithContentsOfURL:baseURL encoding:NSUTF8StringEncoding error:&encodeError];
    NSLog(@"Error: %@", [encodeError localizedDescription]);
    NSLog(@"STRING: %@", jsonString);

    (I'm aware I could/should be using NSURLConnection for asynchronous fetching of data, but at this point in the app development, I don't really need it.)

    The JSON string validates when I test the output. Now I'm having an encoding issue. When I fetch a single echo'd line such as:

    { "testKey":"é" }

    the JSON parser works fine and I am able to create a valid JSON object. However, when I fetch my 2MB JSON string, I get presented with:

    Error: Operation could not be completed. (Cocoa error 261.)

    and a null string. My PHP file is UTF-8 itself, and I am not using utf8_encode() because that seems to double-encode the data, since I'm already pulling the data as NSUTF8StringEncoding. Either way, in my single-echo test, it's the approach that allowed me to successfully log \ASDAS style UTF8 escapes when building the JSON object. What could be causing the error in the case of the larger string?

    Also, I'm not sure if it makes a difference, but I'm using the PHP function addslashes() on my parsed PHP data to account for quotes and such when building the JSON string.

    Read the article

  • We have multiple app servers running against a single database. How do I ensure that each row in a q

    - by Dave
    We have about 7 app servers running .NET Windows services that ping a single SQL Server 2005 queue table and fetch a fixed amount of records to process at fixed intervals. The number of records to process and the amount of time between fetches are both configurable, and are initially set to 100 records and 30 seconds.

    Currently, my queue table has an int status column which can be "Ready", "Processing", "Complete" or "Error". The proc that fetches the records wraps the following steps in a SQL transaction:

    1) Fetch x number of records into a temp table where the status is "Ready". The select uses a holdlock hint.
    2) Update the status on those records in the queue table to "Processing".

    The .NET services do some processing that may take seconds or even minutes per record. Another proc is called per record that simply updates the status to "Complete". The update proc has no transaction, as I'm leaning on the implicit transaction of the update statement here. I don't know the traffic expectations for this, but figure it will be under 10k records per day.

    Is this the best way to handle this scenario? If so, are there any details that I've left out, such as a hint here or there? Thanks!

    Dave
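
    For reference, a common way to collapse steps 1 and 2 into a single atomic claim on SQL Server 2005 is an UPDATE with an OUTPUT clause plus UPDLOCK and READPAST hints, so competing servers skip rows another poller has already locked instead of blocking or double-claiming. The sketch below is illustrative only (shown in Java/JDBC for brevity; the SQL is what matters), and the table and column names are assumptions.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // Sketch: claim a batch of "Ready" rows atomically.
    // UPDLOCK + READPAST lets concurrent pollers skip rows another
    // server has already locked, so no two servers claim the same row.
    public class QueueClaimer {
        // Assumed schema: QueueTable(Id, Status, ...), 0 = Ready, 1 = Processing.
        private static final String CLAIM_SQL =
            "UPDATE TOP (100) q SET q.Status = 1 " +
            "OUTPUT inserted.Id " +
            "FROM QueueTable q WITH (UPDLOCK, READPAST) " +
            "WHERE q.Status = 0";

        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:sqlserver://localhost;databaseName=Queues;integratedSecurity=true");
                 PreparedStatement ps = con.prepareStatement(CLAIM_SQL);
                 ResultSet rs = ps.executeQuery()) { // OUTPUT makes the UPDATE return rows
                while (rs.next()) {
                    long id = rs.getLong(1);
                    // hand id off to a worker; mark Complete/Error in a later statement
                    System.out.println("claimed row " + id);
                }
            }
        }
    }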

    Read the article

  • Implementing Autocompletion in iPhone UITextField for contacts in address book

    - by nacho4d
    Hi, I would like to have a UITextField or UITextView where, as I type, alternatives appear, similar to when you type an address in the Mail application: alternatives appear below and it is possible to tap them, giving a better input user interface (since there is no need to type the complete word, address or phone number).

    I do know how to fetch data from the Address Book framework, and also how to input text in a UITextField/UITextView and use its delegates, but I don't know what kind of structure to use for fetching and showing data as the user does his/her input. I know basic Core Data, if that matters. I hope I can get some help.

    UPDATE (2010/3/10): I don't have a problem making a native-like GUI, but I am asking about the algorithm. Does anybody know what kind of algorithm is best for this? Maybe some binary tree? Or should I just fetch data from Core Data every time? Thanks, Ignacio.

    UPDATE (2010/03/28): I've been very busy these days, so I have not tried UISearchResults, but it seems fine to me. BUT I wonder, was there a necessity to delete the winning answer? I don't think it is fair that my reputation went down and I couldn't see the winning answer. ;(
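
    On the algorithm side, one simple and usually sufficient approach is to sort the contact strings once and binary-search the prefix on every keystroke; for address-book-sized data this is fast enough that a trie is rarely needed. A language-neutral sketch follows (shown in Java, with invented names):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;

    // Sketch: prefix autocompletion over a pre-sorted, lower-cased contact list.
    // Binary search finds the first candidate; we then scan while the prefix matches.
    public class PrefixCompleter {
        private final List<String> sorted;

        public PrefixCompleter(List<String> contacts) {
            sorted = new ArrayList<>(contacts);
            sorted.replaceAll(String::toLowerCase);
            Collections.sort(sorted);
        }

        public List<String> complete(String prefix, int limit) {
            String p = prefix.toLowerCase();
            int i = Collections.binarySearch(sorted, p);
            if (i < 0) i = -i - 1; // insertion point = first string >= prefix
            List<String> out = new ArrayList<>();
            while (i < sorted.size() && out.size() < limit && sorted.get(i).startsWith(p)) {
                out.add(sorted.get(i++));
            }
            return out;
        }

        public static void main(String[] args) {
            PrefixCompleter pc = new PrefixCompleter(
                Arrays.asList("Ana", "Andrew", "Bob", "Ignacio"));
            System.out.println(pc.complete("an", 10)); // prints [ana, andrew]
        }
    }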

    Read the article

  • Make a Crystal Report with data fetched from two different tables

    - by Selom
    Hi, I'm using VB.NET and I need to fetch data from two different tables and display it in the form of a report. These are the schemas and data of my two tables:

    CREATE TABLE personal_details (
        staff_ID integer PRIMARY KEY,
        title varchar(10),
        fn varchar(250),
        ln varchar(250),
        mn varchar(250),
        dob varchar(50),
        hometown varchar(250),
        securityno varchar(50),
        phone varchar(15),
        phone2 varchar(15),
        phone3 varchar(15),
        email varchar(250),
        address varchar(300),
        confirmation varchar(50),
        retirement varchar(50),
        designation varchar(250),
        region varchar(250));

    INSERT INTO personal_details VALUES(1,'Mr.','Selom','AMOUZOU','Kokou','Sunday, March 28, 2010','Ho',7736,'024-747-4883','277-383-8383','027-838-3837','[email protected]','Lapaz Kum Hotel','Sunday, March 28, 2010','Sunday, March 28, 2010','Designeur','Brong Ahafo');

    CREATE TABLE training(
        training_ID integer primary key NOT NULL,
        staff_ID varchar(100),
        training_level varchar(60),
        school_name varchar(100),
        start_date varchar(100),
        end_date varchar(100));

    INSERT INTO training VALUES(1,1,'Primary School','New School','Feb 1955','May 1973');
    INSERT INTO training VALUES(2,1,'Middle/JSS','Ipmc','Feb 1955','May 1973');

    I'm trying to fetch and display data from the tables above the following way:

    Dim rpt As New CrystalReport1()
    Dim da As New SQLiteDataAdapter
    Dim ds As New presbydbDataSet
    ds.EnforceConstraints = False
    If conn.State = ConnectionState.Closed Then
        conn.Open()
    End If
    Dim cmd As New SQLiteCommand("SELECT p.fn, t.training_level FROM personal_details p INNER JOIN training t ON p.staff_ID = t.staff_ID", conn)
    cmd.ExecuteNonQuery()
    da.SelectCommand = cmd
    da.Fill(ds)
    rpt.SetDataSource(ds)
    CrystalReportViewer1.ReportSource = rpt
    conn.Close()

    My problem is that nothing displays on the report unless I take off either the training fields or the personal_details fields from the report. Need your help. Thanks.

    Read the article

  • Cloning a git repository from an svn repository results in a file-less, remote-branch-less git repo

    - by Tchalvak
    Working SVN repo

    I'm starting a git repo to interact with an svn repo. The svn repository is set up and working fine, with a single commit of a basic README file in it. Checking it out works fine:

    tchalvak:~/test/svn-test$ svn checkout --username=myUsernameHere http://www.url.to/project/here/charityweb/
    A charityweb/README
    Checked out revision 1.

    Failed git-svn clone of svn repo

    When I try to clone the repository in git, the first step shows no errors...

    tchalvak:~/test$ git svn clone -s --username=myUserNameHere http://www.url.to/project/here/charityweb/
    Initialized empty Git repository in /home/tchalvak/test/charityweb/.git/
    Authentication realm: <http://www.url.to/project/here:80> Charity Web
    Password for 'myUserNameHere':

    ...but results in a useless folder:

    tchalvak:~/test$ ls
    charityweb/
    tchalvak:~/test$ cd charityweb/
    tchalvak:~/test/charityweb$ ls
    tchalvak:~/test/charityweb$ ls -al
    total 12
    drwxr-xr-x 3 tchalvak tchalvak 4096 2010-04-02 13:46 .
    drwxr-xr-x 4 tchalvak tchalvak 4096 2010-04-02 13:46 ..
    drwxr-xr-x 8 tchalvak tchalvak 4096 2010-04-02 13:47 .git
    tchalvak:~/test/charityweb$ git branch -av
    tchalvak:~/test/charityweb$ git status
    # On branch master
    #
    # Initial commit
    #
    nothing to commit (create/copy files and use "git add" to track)
    tchalvak:~/test/charityweb$ git fetch
    fatal: Where do you want to fetch from today?
    tchalvak:~/test/charityweb$ git rebase origin/master
    fatal: bad revision 'HEAD'
    fatal: Needed a single revision
    invalid upstream origin/master
    tchalvak:~/test/charityweb$ git log
    fatal: bad default revision 'HEAD'

    How do I get something I can commit back to? I expect I'm doing something wrong in this process, but what?

    Read the article

  • Post on a logged-in user's Facebook wall

    - by Matt Nathanson
    Hey all, I've created a widget that will essentially unlock a music track, provided you post to either your Twitter account or your Facebook wall. I've signed up through Facebook Connect and I am able to successfully post onto my own wall... but the functionality I'm looking for is to be able to take a user's username and password, automatically log in to Facebook, and send my desired message. Like I said, it posts on my wall successfully; it's just not using the username and password from the form fields to log in to their respective Facebook accounts and post.

    <?php
    $facename = $_POST['facename'];
    $facepass = $_POST['facepass'];

    define('FB_APIKEY', 'my_api_key');
    define('FB_SECRET', 'my_secret_phrase_');
    define('FB_SESSION', 'my_session_id');

    require_once('facebook.php');

    echo "post on wall";
    try {
        $facebook = new Facebook(FB_APIKEY, FB_SECRET);
        $facebook->api_client->session_key = FB_SESSION;
        $fetch = array('friends' => array('pattern' => '.*', 'query' => "select uid2 from friend where uid1={$facename}"));
        echo $facebook->api_client->admin_setAppProperties(array('preload_fql' => json_encode($fetch)));
        $message = 'I downloaded Automatic Loveletter\'s new single \'To Die For\' here!';
        if( $facebook->api_client->stream_publish($message))
            echo "Added on FB Wall";
    } catch(Exception $e) {
        echo $e . "<br />";
    }
    ?>

    Any help in the right direction is greatly appreciated! Thanks, Matt

    Read the article

  • Lazy loading of Blob properties of one class

    - by Khosro
    Hi, my class contains "summary" and "title" properties, which are Blobs, plus other properties. Code (I show part of the class):

    public class News extends BaseEntity {

        @Lob
        @Basic(fetch = FetchType.LAZY)
        public String getSummary() {
            return summary;
        }

        @Lob
        @Basic(fetch = FetchType.LAZY)
        public String getTitle() {
            return title;
        }

        @Temporal(TemporalType.TIMESTAMP)
        public Date getPublishDate() {
            return publishDate;
        }
    }

    I instrument this class to lazy-load the Blob properties using the class "org.hibernate.tool.instrument.javassist.InstrumentTask". When I write this code to retrieve only the summary of a news item,

    newsDAO.findByid(1L).getSummary();

    Hibernate generates these queries:

    Hibernate: select news0_.id as id1_, news0_.entityVersion as entityVe2_1_, news0_.publishDate as publish15_1_, news0_.url as url1_ from News news0_
    Hibernate: select news_.summary as summary1_, news_.title as title1_ from News news_ where news_.id=?

    I have two questions:

    1. I only want to retrieve the "summary" property, not the "title" property, but Hibernate's queries show that it also retrieves "title". Why does this happen (I want to lazy-load each property separately)? It seems that when I load one of the Blob properties, Hibernate loads all of them. Why? (This is my main question.)
    2. Why does Hibernate generate two queries for retrieving only the "summary" property of the news?

    Khosro.
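
    If the goal is strictly to read one LOB column without triggering the others, a common workaround is a projection query, which selects just that column and so bypasses the lazy-property group entirely. A minimal sketch, assuming a Hibernate Session is at hand (the DAO wiring is omitted):

    import org.hibernate.Session;

    // Sketch: fetch only the summary column via an HQL projection,
    // so Hibernate selects that single column instead of loading the
    // whole group of lazy properties on the entity.
    public class NewsSummaryLoader {
        public String loadSummary(Session session, Long id) {
            return (String) session.createQuery(
                    "select n.summary from News n where n.id = :id")
                .setParameter("id", id)
                .uniqueResult();
        }
    }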

    Read the article

  • Select a distinct record; filtering is not working

    - by help_inmssql
    Hello everyone, I am new to SQL. I need a query that produces the following result. I have a table with records such as:

    c1 c2    c3                  c4 c5 c6
    1  John  2.3.2010 12:09:54   4  7  99
    2  mike  2.3.2010 13:09:59   8  6  88
    3  ahmad 2.3.2010 13:09:59   1  9  19
    4  Jim   23.3.2010 16:35:14  4  5  99
    5  run   23.3.2010 12:09:54  3  8  12

    I want to fetch only the latest record per day. If two records happen at the same time, sort by c1. So for records 1 and 3 it should fetch record 3:

    3  ahmad 2.3.2010 14:09:59   1  9  19
    4  Jim   23.3.2010 16:35:14  4  5  99

    I have got a new problem with this: if I filter the records based on conditions, the last record is missing. I tried many ways but it still fails. Here update_log is my table:

    SELECT *
    FROM update_log t1
    WHERE (t1.c3) = (
        SELECT MAX(t2.c3)
        FROM update_log t2
        WHERE DATEDIFF(dd, t2.c3, t1.c3) = 0
    )
    and t1.c3 > '02.03.2010' and t1.modified_at <= '22.03.2010'
    ORDER BY t1.c3 ASC

    But I am not able to retrieve the record

    4  Jim   23.3.2010 16:35:14  4  5  99

    I don't know why this query results in only

    3  ahmad 2.3.2010 14:09:59   1  9  19

    The format of column c3 is datetime. I am pumping data into the column using $date = date("d.m.Y H:i", time()); -- a simple date fetch of today.

    Another query that I tried for the same purpose:

    select *
    from (select convert(varchar(10), c3, 104) as date, max(c3) as max_date, max(c1) as Nr
          from update_log
          group by convert(varchar(10), c3, 104)) as t2
    inner join update_log as t1
        on (t2.max_date = t1.c3 and convert(varchar(10), c3, 104) = date and t1.[c1] = Nr)
    WHERE t1.c3 >= '02.03.2010' and t1.c3 <= '16.04.2010'

    It gives the same error: the last record is not coming.

    Read the article

  • C# - InvalidCastException when fetching double from sqlite

    - by Irro
    I keep getting an InvalidCastException when I'm fetching any double from my SQLite database in C#. The exception says "Specified cast is not valid." I am able to see the value in a SQL manager, so I know it exists. It is possible to fetch strings (VARCHARs) and ints from the database. I'm also able to fetch the value as an object, but then I get "66.0" when it's supposed to be "66,8558604947586" (a latitude coordinate). Anyone know how to solve this? My code:

    using System.Data.SQLite;
    ...
    SQLiteConnection conn = new SQLiteConnection(@"Data Source=C:\\database.sqlite; Version=3;");
    conn.Open();
    SQLiteDataReader reader = getReader(conn, "SELECT * FROM table");

    // These are working
    String name = reader.GetString(1);
    Int32 value = reader.GetInt32(2);

    // This is not working
    Double latitude = reader.GetDouble(3);

    // This gives me the wrong value
    Object o = reader[3]; // or reader["latitude"] or reader.GetValue(3)

    Read the article

  • Oracle rownum in db2 - Java data archiving

    - by HonorGod
    I have a data archiving process in Java that moves data between DB2 and Sybase. FYI - this is not done through any import/export process, because there are several conditions on each table that are only available at run-time, so this process is developed in Java.

    Right now I have a single DatabaseReader and DatabaseWriter defined for each source and destination combination, so that data is moved in multiple threads. I want to expand this further, so that I can have multiple DatabaseReaders and multiple DatabaseWriters defined for each source and destination combination. So, for example, if the source data is about 100 rows and I define 10 readers and 10 writers, each reader will read 10 rows and give them to a writer. I hope this will give me extreme performance, depending on the resources available on the server (CPU, memory, etc.).

    But the problem is that these source tables do not have primary keys, and it is extremely difficult to grab rows in multiple sets. Oracle provides the rownum concept, and I guess life is much simpler there... but what about DB2? How can I achieve this behavior with DB2? Is there a way to say fetch the first 10 records, then fetch the next 10 records, and so on?

    Any suggestions / ideas?

    DB2 version - DB2 v8.1.0.144, Fix Pack 16, Linux
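
    For what it's worth, DB2's closest analogue to Oracle's rownum is the ROW_NUMBER() OLAP function, which DB2 v8 supports. A hedged JDBC sketch of batched reads follows; the table and ordering column are invented, and note that without a deterministic ORDER BY inside OVER() the numbering is not stable across statements, so parallel readers need some stable ordering expression (or a single shared cursor).

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // Sketch: paged reads from a keyless DB2 table using ROW_NUMBER().
    // SOME_COLUMN stands in for whatever combination of columns gives a
    // stable ordering; without one, pages can overlap or miss rows.
    public class Db2PagedReader {
        private static final String PAGE_SQL =
            "SELECT * FROM (" +
            "  SELECT t.*, ROW_NUMBER() OVER (ORDER BY t.SOME_COLUMN) AS rn" +
            "  FROM ARCHIVE_SOURCE t" +
            ") AS numbered WHERE rn BETWEEN ? AND ?";

        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:db2://dbhost:50000/ARCHDB", "user", "pass")) { // placeholder URL
                int pageSize = 10;
                for (int start = 1; start <= 100; start += pageSize) { // ~100 rows, per the example
                    try (PreparedStatement ps = con.prepareStatement(PAGE_SQL)) {
                        ps.setInt(1, start);
                        ps.setInt(2, start + pageSize - 1);
                        try (ResultSet rs = ps.executeQuery()) {
                            while (rs.next()) {
                                // hand the row off to a writer thread here
                            }
                        }
                    }
                }
            }
        }
    }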

    Read the article

  • Git-svn branch hoses dcommit when using an odd branch structure

    - by Chuck Vose
    I had a boss, past tense, who decided to put svn branches in the same folder as trunk. Normally this wouldn't affect me that much, but since I'm using git-svn, things aren't going so well. After I did a fetch, it created a folder for each branch in my root folder, so I have three folders: drupal, trunk, and client. The drupal folder is git's master branch; client and trunk are the svn branches. Merging and committing work great; in fact, everything git-related is working superbly. However, dcommit is totally hosed: it's trying to commit a folder called client and one called trunk. I can't even imagine what havoc this would cause for svn later on.

    So my question is: what have I done wrong in my .git/config, and is there anything I can do to fix this, or am I going to have to suffer and go back to using svn? Please don't make me go back. I don't think I can take it anymore. Bastard boss knows how to leave a legacy.

    [svn-remote "svn"]
        url = https://svn.mydomain.com/svn/project_name
        fetch = trunk:refs/remotes/trunk
        branches = *:refs/remotes/*
        tags = tags/*:refs/remotes/tags/*

    Normally the branches line would look like this (when using --stdlayout):

        branches = branches/*:refs/remotes/branches/*

    ls output is thus:

    $ ls
    client/ docs/ drupal/ sql/ trunk/

    Read the article

  • Hibernate: order multiple one-to-many relations

    - by Markos Fragkakis
    I have a search screen, using JSF, JBoss Seam and Hibernate underneath. There are columns for A, B and C, where the relations are one-to-many at each step: A (1 --> *) B (1 --> *) C. A has a List<B> and B has a List<C>. The UI table supports ordering by any column (ASC or DESC), so I want the results of the query to be ordered. This is the reason I used Lists in the model. However, I got an exception that Hibernate cannot eagerly fetch multiple bags (it considers both Lists to be bags). There is an interesting blog post here, and they identify the following solutions:

    1. Use the @IndexColumn annotation (there is none in my DB, and what's more, I want the position of results to be determined by the ordering, not by an index column).
    2. Fetch lazily (for performance reasons, I need eager fetching).
    3. Change List to Set.

    So, I changed the Lists to Sets, which, by the way, is more correct model-wise. First, if I don't use @OrderBy, the PersistentSet returned by Hibernate wraps a HashSet, which has no ordering. Second, if I do use @OrderBy, the PersistentSet wraps a LinkedHashSet, which is what I would like, but the @OrderBy property is hardcoded, so all other ordering I perform through the UI comes after it.

    I tried again with Sets, and used SortedSet (and its implementation, TreeSet), but I have some issues. First, I want ordering to take place in the DB, and not in-memory, which is what TreeSet does (either through a Comparator, or through the Comparable interface of the elements). Second, I found that there is the Hibernate annotation @Sort, which has a SortOrder.UNSORTED value and can also take a Comparator. I still haven't managed to make it compile, but I am still not convinced it is what I need. Any advice?
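
    One further option worth sketching: leave the collections lazy and push the UI's ordering into the query itself, join-fetching only the collection being sorted. The entity and property names below are placeholders, and note the caveat that the fetched order only survives into the mapped collection when it is a List/bag, not a Set.

    import java.util.Arrays;
    import java.util.List;
    import org.hibernate.Session;

    // Sketch: database-side ordering driven by the UI, instead of @OrderBy.
    // The sort column is checked against a whitelist so user input is never
    // concatenated into the HQL (avoiding injection).
    public class SearchQuery {
        private static final List<String> SORTABLE = Arrays.asList("name", "createdDate");

        public List<?> find(Session session, String sortProperty, boolean asc) {
            if (!SORTABLE.contains(sortProperty)) {
                throw new IllegalArgumentException("Unsupported sort column: " + sortProperty);
            }
            String hql = "select distinct a from A a "
                       + "left join fetch a.bs b "  // fetch only the collection being sorted
                       + "order by b." + sortProperty + (asc ? " asc" : " desc");
            return session.createQuery(hql).list();
        }
    }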

    Read the article

  • video streaming over http in blackberry

    - by ysnky
    Hi all, while I was searching for a video player over HTTP, I found the article located at this URL:

    http://www.blackberry.com/knowledgecenterpublic/livelink.exe/fetch/2000/348583/800332/1089414/Streaming_media_-_Start_to_finish.html?nodeid=2456737&vernum=0

    I can run it by adding ";deviceside=true" at the end of the URL. It works fine in the JDE 4.5 simulator. It gets 3gp videos from my local server. I tested with 580 KB files and it works fine. But when I get the same file from my real server (not local), I have problems with big files (e.g. 580 KB). It plays 180 KB files (though sometimes it does not play those either) but never plays the 580 KB file. I also deployed my application to my 9000 device; it sometimes plays the small file (180 KB) but never plays the big file (580 KB). Why does it play when the file is on my local server, but not in the real world? I've been stuck for days. Hope you can help me.

    Also, the code at the URL given below does not work; the only code I've found is the one above:

    blackberry.com/knowledgecenterpublic/livelink.exe/fetch/2000/348583/800332/1089414/How_To_-_Play_video_within_a_BlackBerry_smartphone_application.html?nodeid=1383173&vernum=0

    BTW, there is no method such as resize(long param) in the CircularByteBuffer class, so I commented out the relevant line (buffer.resize(buffer.getSize() + (buffer.getSize() * percent / 100));) as shown below:

    public void increaseBufferCapacity(int percent) {
        if (percent < 0) {
            log(0, "FAILED! SP.setBufferCapacity() - " + percent);
            throw new IllegalArgumentException("Increase factor must be positive..");
        }
        synchronized(readLock) {
            synchronized(connectionLock) {
                synchronized(userSeekLock) {
                    synchronized(mediaIStream) {
                        log(0, "SP.setBufferCapacity() - " + percent);
                        // buffer.resize(buffer.getSize() + (buffer.getSize() * percent / 100));
                        this.bufferCapacity = buffer.getSize();
                    }
                }
            }
        }
    }

    Thanks in advance.

    Read the article

  • sql 2008 sqldmo alternative

    - by alexdelpiero
    Hi! I was previously using SQL-DMO to automatically generate scripts from the database. Now I have upgraded to SQL Server 2008 and I don't want to use this feature anymore, since Microsoft will be dropping it. Is there any other alternative I can use to connect to a server and generate scripts automatically from a database? Any answer is welcome. Thanks in advance. This is the procedure I was previously using:

    CREATE PROC GenerateSP (
        @server varchar(30) = null,
        @uname varchar(30) = null,
        @pwd varchar(30) = null,
        @dbname varchar(30) = null,
        @filename varchar(200) = 'c:\script.sql'
    )
    AS
    DECLARE @object int
    DECLARE @hr int
    DECLARE @return varchar(200)
    DECLARE @exec_str varchar(2000)
    DECLARE @spname sysname
    SET NOCOUNT ON

    -- Sets the server to the local server
    IF @server is NULL SELECT @server = @@servername
    -- Sets the database to the current database
    IF @dbname is NULL SELECT @dbname = db_name()
    -- Sets the username to the current user name
    IF @uname is NULL SELECT @uname = SYSTEM_USER

    -- Create an object that points to the SQL Server
    EXEC @hr = sp_OACreate 'SQLDMO.SQLServer', @object OUT
    IF @hr <> 0
    BEGIN
        PRINT 'error create SQLOLE.SQLServer'
        RETURN
    END

    -- Connect to the SQL Server
    IF @pwd is NULL
    BEGIN
        EXEC @hr = sp_OAMethod @object, 'Connect', NULL, @server, @uname
        IF @hr <> 0
        BEGIN
            PRINT 'error Connect'
            RETURN
        END
    END
    ELSE
    BEGIN
        EXEC @hr = sp_OAMethod @object, 'Connect', NULL, @server, @uname, @pwd
        IF @hr <> 0
        BEGIN
            PRINT 'error Connect'
            RETURN
        END
    END

    -- Verify the connection
    EXEC @hr = sp_OAMethod @object, 'VerifyConnection', @return OUT
    IF @hr <> 0
    BEGIN
        PRINT 'error VerifyConnection'
        RETURN
    END

    SET @exec_str = 'DECLARE script_cursor CURSOR FOR SELECT name FROM ' + @dbname + '..sysobjects WHERE type = ''P'' ORDER BY Name'
    EXEC (@exec_str)

    OPEN script_cursor
    FETCH NEXT FROM script_cursor INTO @spname
    WHILE (@@fetch_status <> -1)
    BEGIN
        SET @exec_str = 'Databases("' + @dbname + '").StoredProcedures("' + RTRIM(UPPER(@spname)) + '").Script(74077,"' + @filename + '")'
        EXEC @hr = sp_OAMethod @object, @exec_str, @return OUT
        IF @hr <> 0
        BEGIN
            PRINT 'error Script'
            RETURN
        END
        FETCH NEXT FROM script_cursor INTO @spname
    END
    CLOSE script_cursor
    DEALLOCATE script_cursor

    -- Destroy the object
    EXEC @hr = sp_OADestroy @object
    IF @hr <> 0
    BEGIN
        PRINT 'error destroy object'
        RETURN
    END
    GO

    Read the article
