Search Results

Search found 40567 results on 1623 pages for 'database performance'.


  • Issue configuring Oracle database for SSL

    - by Santhosha Kaldambe
    Hello, I want to setup Oracle for SSL communication. I am not using SSL authentication for database user. As first requirement, generated self signed certificate using OpenSSL and added certificate to wallet. The wallet location is specified in server configuration. Created listener and it is starting however it does not provide any service. The default listener (non SSL) is working fine. When I execute LSNRCTL.EXE status SSLLISTENER it gives below output. STATUS of the LISTENER Alias SSLLISTENER Version TNSLSNR for 32-bit Windows: Version 11.1.0.6.0 - Production Start Date 14-NOV-2009 01:47:08 Uptime 16 days 22 hr. 14 min. 3 sec Trace Level off Security ON: Local OS Authentication SNMP OFF Listener Parameter File C:\app\Administrator\product\11.1.0\db_1\network\admin\listener.ora Listener Log File c:\app\administrator\diag\tnslsnr\\ssllistener\alert\log.xml Listening Endpoints Summary... (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=)(PORT =2484))) The listener supports no services The command completed successfully Here is exact content of various files after configuration. 1) File Name: tnsnames.ora ORCL = (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = )(PORT 1521)) ) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orcl) ) ) 2) File Name: sqlnet.ora SSL_VERSION = 0 NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT) sqlnet.authentication_services= (NONE) tcp.validnode_checking = no tcp.invited_nodes=(PS0803.oraebs.com,PS2948,PS5098) SSL_CLIENT_AUTHENTICATION = FALSE WALLET_LOCATION = (SOURCE = (METHOD = FILE) (METHOD_DATA = (DIRECTORY = C:\app\Administrator\admin\orcl\Server_Wallet) ) ) 3) File Name: listener.ora SSL_CLIENT_AUTHENTICATION = FALSE WALLET_LOCATION = (SOURCE = (METHOD = FILE) (METHOD_DATA = (DIRECTORY = C:\app\Administrator\admin\orcl\Server_Wallet) ) ) LISTENER = (DESCRIPTION_LIST = (DESCRIPTION = (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521)) ) (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = )(PORT 1521)) ) ) SSLLISTENER = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCPS)(HOST = )(PORT = 2484)) ) Thanks Santhosh
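
    "The listener supports no services" generally just means that no database service has registered with SSLLISTENER yet (for instance via a SID_LIST_SSLLISTENER section or dynamic registration through LOCAL_LISTENER); the two "(PORT 1521)" entries are also missing an "=". Once a service does register, a small JDBC client is a convenient way to exercise the TCPS endpoint end to end. The following is only a sketch: it assumes the Oracle thin driver plus the PKI jars (oraclepki, osdt_cert, osdt_core) on the classpath and a client-side wallet that already trusts the self-signed certificate; host, credentials and paths are placeholders.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.util.Properties;

        // Sketch of a TCPS smoke test against the SSL listener; all names below
        // are placeholders, and the client wallet must contain the server's
        // self-signed certificate as a trusted certificate.
        public class TcpsConnectionTest {
            public static void main(String[] args) throws Exception {
                Class.forName("oracle.jdbc.OracleDriver");
                String url = "jdbc:oracle:thin:@(DESCRIPTION="
                           + "(ADDRESS=(PROTOCOL=tcps)(HOST=dbhost)(PORT=2484))"
                           + "(CONNECT_DATA=(SERVICE_NAME=orcl)))";

                Properties props = new Properties();
                props.setProperty("user", "scott");        // example credentials
                props.setProperty("password", "tiger");
                // Client-side wallet holding the trusted server certificate.
                props.setProperty("oracle.net.wallet_location",
                    "(SOURCE=(METHOD=file)(METHOD_DATA=(DIRECTORY=C:\\app\\client_wallet)))");
                props.setProperty("oracle.net.ssl_server_dn_match", "false");

                Connection conn = DriverManager.getConnection(url, props);
                System.out.println("Connected over TCPS, closed=" + conn.isClosed());
                conn.close();
            }
        }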

  • schedule backup and restore of SSAS 2008 database

    - by Manjot
    Hi, I can back up and restore databases on Microsoft SQL Server Analysis Services 2008 using the GUI. I want to schedule a backup and restore it to another server every night, so what I did is: I scripted out the backup and restore process from the GUI, created a new SQL Server Agent job in the database engine, and added a "Run SSAS query" step with the scripts copied into it. But it fails. The scripts that the GUI produced look like: <Backup xmlns="http://schemas.microsoft.com/analysisservices/2003/engine"> <Object> <DatabaseID>DB</DatabaseID> </Object> <File>C:\Backup\DB.abf</File> <AllowOverwrite>true</AllowOverwrite> </Backup> <Restore xmlns="http://schemas.microsoft.com/analysisservices/2003/engine"> <File>\\server\C$\Backup\DB.abf</File> <DatabaseName>DB</DatabaseName> <AllowOverwrite>true</AllowOverwrite> </Restore> Any help please?

  • SQL 2008 disk layout on a budget (this is for database mirroring)

    - by user22215
    Guys, I'm rolling out a SQL database server that will be used to back SharePoint 2007, and right now I need some advice on my disk layout. I have two Dell servers that are configured a little differently in terms of storage. The principal server will be using a combination of local storage and SAN storage. I have to work with what I have: the organization is currently all allocated on SAN storage, and it was like pulling teeth to even get what I have to work with now. My disk setup on the principal is as follows: RAID 1 for the OS, RAID 10 for logs, RAID 10 fiber on the SAN for high-IO databases, and RAID 10 SATA on the SAN for content databases. My question in regards to the principal server is: where should I place tempdb? I thought about placing it on the fiber RAID 10 that will be hosting my high-IO SharePoint SSP databases; my only other choice is to move it to the RAID 1 OS partition, which I'm sure you guys will be against. Now let's talk about the mirror server: it is not connected to the SAN and is all local storage, six 15k SAS drives. My question is the same: do I put tempdb on the OS partition, or do I leave the OS partition alone and use a single RAID 10 for everything? Any help you can provide is much appreciated.

  • Database problem with MS JET

    - by Zimmy-DUB-Zongy-Zong-DUBBY
    I am migrating a bunch of sites which each use an Access database (or whatever an MDB file is). If I try to load the site, I get the following error: Microsoft OLE DB Provider for SQL Server error '80004005' [DBNETLIB][ConnectionOpen (Connect()).]SQL Server does not exist or access denied. If I rename the MDB file, I get a complaint that the file does not exist, which makes sense. If the file is named correctly, the site tries to load for about 30 seconds or so, and then just fails with the above message. During this waiting period, I can see a lock file being created (and then at some point removed). The MDB file and its parent dir have full permissions granted to all users. Given that the lock file is successfully created and removed, I don't think this is a "real" permission issue. The OS is Windows Server 2003 SP2. I am not sure about much more detail on its config with regard to Access databases. I also don't know what version it is expected to be. The VB code in question: set oConn=server.createobject("adodb.connection") DSNtemp="Provider=Microsoft.Jet.OLEDB.4.0;Data Source=D:\fullPathGoesHere\db\sitedb.mdb" oConn.Open DSNtemp

  • Moving a Lotus database causes incoming mail failure

    - by Logman
    Hey all, I moved a couple of Lotus Notes databases to another HD (we ran out of space) by using a directory link/pointer on the Domino server: a text file with the .DIR extension containing the desired new path, etc. That works fine. I open up a Lotus Notes client, use the ID and then OPEN the database; the directory link works, and the db at its new location opens with no problems. We then found out that we couldn't FORWARD any mail. We did the following to make that work: go to menu FILE|MOBILE|EDIT CURRENT LOCATION, go to the MAIL tab, and enter the correct path for "Mail File:". For example, it should read "mail\morespace\flabor" and not "mail\flabor" ("morespace" = the directory link/pointer). We can forward and reply to emails fine after that fix. The remaining problem is that the user gets no incoming email: Domino is still trying to deliver that user's incoming mail to its old db location "mail\flabor" instead of "mail\morespace\flabor", and the delivery error says the user does not exist. Is this a cache problem? We have reset the server ("Q" at the prompt), though we have not completely shut it down. Thanks, Frank

  • How to create a MySQL database that can contain any character, including different languages

    - by Jakke
    I'm trying to create a database that has to contain articles in different languages. I'm using MariaDB as my server and I know bits of SQL. My knowledge doesn't really cover details like the differences between engines such as MyISAM and InnoDB, or character sets like utf8/16/32 and latin 5/7/etc. I do know that the character set matters; I guess what I'm looking for is an all-encompassing character set and an engine that best deals with this type of content. Also, is there an advantage in storing articles in multiple data rows (the equivalent of different pages) to make things a little faster, or would you store a whole article in a single data row? Or does that depend on the size of the articles? Sorry for my noobish question; I know the information is all out on the internet, but it would take me quite a long time to research and get a grip on everything. It would be cool if someone with experience could give me a little head start and point me in the right direction. This is for an intranet site; consider the content to be somewhat like a blog (and no, I don't want WordPress or something similar at this point). Not sure if it matters, but I tend to create and manipulate my tables with phpMyAdmin, I use Apache as the web server, and it all runs on Linux.
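
    A minimal sketch of the usual choice for this kind of content: utf8mb4 (full Unicode, including 4-byte characters) with InnoDB. It assumes a MariaDB/MySQL version that supports utf8mb4 and MySQL Connector/J on the classpath; host, credentials and table names are placeholders, and the same statements can be pasted straight into phpMyAdmin.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        // Sketch: create a schema and table that can hold text in any language.
        // Assumes MariaDB/MySQL with Connector/J on the classpath; host,
        // credentials and names are placeholders.
        public class CreateArticleSchema {
            public static void main(String[] args) throws Exception {
                String url = "jdbc:mysql://localhost:3306/?useUnicode=true&characterEncoding=UTF-8";
                Connection conn = DriverManager.getConnection(url, "user", "password");
                Statement st = conn.createStatement();

                // utf8mb4 covers the full Unicode range (the older "utf8" charset in
                // MySQL/MariaDB only stores up to 3-byte characters); InnoDB is the
                // usual engine for transactional, row-locked content like articles.
                // Depending on driver version, the connection may also need
                // "SET NAMES utf8mb4" to send 4-byte characters over the wire.
                st.executeUpdate("CREATE DATABASE IF NOT EXISTS articles "
                        + "CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci");
                st.executeUpdate("CREATE TABLE IF NOT EXISTS articles.article ("
                        + " id BIGINT AUTO_INCREMENT PRIMARY KEY,"
                        + " language VARCHAR(10) NOT NULL,"
                        + " title VARCHAR(255) NOT NULL,"
                        + " body MEDIUMTEXT NOT NULL"
                        + ") ENGINE=InnoDB DEFAULT CHARSET=utf8mb4");

                st.close();
                conn.close();
            }
        }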

  • Issues with "There is already an object named 'xxx' in the database"

    - by Hoser
    I'm fairly new to SQL so this may be an easy mistake, but I haven't been able to find a solid solution anywhere else. Problem is whenever I try to use my temp table, it tells me it cannot be used because there is already an object with that name. I frequently try switching up the names, and sometimes it'll let me work with the table for a little while, but it never lasts for long. Am I dropping the table incorrectly? Also, I've had people suggest to just use a permanent table, but this database does not allow me to do that. create table #RandomTableName(NameOfObject varchar(50), NameOfCounter varchar(50), SampledValue decimal) select vPerformanceRule.ObjectName, vPerformanceRule.CounterName, Perf.vPerfRaw.SampleValue into #RandomTableName from vPerformanceRule, vPerformanceRuleInstance, Perf.vPerfRaw where (ObjectName like 'Processor' AND CounterName like '% Processor Time') OR(ObjectName like 'System' AND CounterName like 'Processor Queue Length') OR(ObjectName like 'Memory' AND CounterName like 'Pages/Sec') OR(ObjectName like 'Physical Disk' AND CounterName like 'Avg. Disk Queue Length') OR(ObjectName like 'Physical Disk' AND CounterName like 'Avg. Disk sec/Read') OR(ObjectName like 'Physical Disk' and CounterName like '% Disk Time') OR(ObjectName like 'Logical Disk' and CounterName like '% Free Space' AND SampleValue > 70 AND SampleValue < 100) order by ObjectName, SampleValue drop table #RandomTableName
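
    One thing worth noting about the snippet above: SELECT ... INTO creates the temp table by itself, so combining it with an explicit CREATE TABLE of the same name is one straightforward way to get exactly this "already an object named" error, and an OBJECT_ID guard cleans up any leftover copy in the same session. A minimal sketch of that pattern, wrapped in JDBC only to keep it self-contained (the statements can be run verbatim in SSMS; connection details are placeholders and assume the Microsoft SQL Server JDBC driver):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        // Sketch of the usual temp-table pattern: SELECT ... INTO creates the
        // table itself, so there is no prior CREATE TABLE, and an OBJECT_ID
        // guard removes any leftover copy from the same session.
        public class TempTableExample {
            public static void main(String[] args) throws Exception {
                Connection conn = DriverManager.getConnection(
                        "jdbc:sqlserver://localhost;databaseName=YourDb",
                        "user", "password");
                Statement st = conn.createStatement();

                // Drop a leftover temp table from this session, if any.
                st.execute("IF OBJECT_ID('tempdb..#RandomTableName') IS NOT NULL "
                         + "DROP TABLE #RandomTableName");

                // SELECT ... INTO creates #RandomTableName; no CREATE TABLE beforehand.
                // (Condition list shortened; the full WHERE clause from above fits here.)
                st.execute("SELECT vPerformanceRule.ObjectName, vPerformanceRule.CounterName, "
                         + "       Perf.vPerfRaw.SampleValue "
                         + "INTO #RandomTableName "
                         + "FROM vPerformanceRule, vPerformanceRuleInstance, Perf.vPerfRaw "
                         + "WHERE ObjectName LIKE 'Processor' AND CounterName LIKE '% Processor Time'");

                st.execute("DROP TABLE #RandomTableName");
                st.close();
                conn.close();
            }
        }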

  • Improving the performance of XSL

    - by Rachel
    In the below XSL for the variable "insert-data", I have an input param with the structure, <insert-data> <data compareIndex="4" nodeName="d1e1"> <a/> </data> <data compareIndex="5" nodeName="d1e1"> <b/> </data> <data compareIndex="7" nodeName="d1e2"> <a/> </data> <data compareIndex="9" nodeName="d1e2"> <b/> </data> </insert-data> where "nodeName" is the id of a node and "compareIndex" is the position of the text content relative to the node having id "$nodeName". I am using the below XSL to select all the text nodes(generate-id) that satisfy the above condition and construct a data xml. The below implementation works perfectly but the time taken for the execution is in min. Is there a better way of implementing or is there any in-efficient operation being used. From my observation the code where the preceding text length is calculated consumes the major time. Please share your thoughts to improve the performance of the XSL. I am using Java SAXON XSL transformer. <xsl:variable name="insert-data" as="element()*"> <xsl:for-each select="$insert-file/insert-data/data"> <xsl:sort select="xsd:integer(@index)"/> <xsl:variable name="compareIndex" select="xsd:integer(@compareIndex)" /> <xsl:variable name="nodeName" select="@nodeName" /> <xsl:variable name="nodeContent" as="node()"> <xsl:copy-of select="node()"/> </xsl:variable> <xsl:for-each select="$main-root/*//text()[ancestor::*[@id = $nodeName]]"> <xsl:variable name="preTextLength" as="xsd:integer" select="sum((preceding::text())[. ancestor::*[@id = $nodeName]]/string-length(.))" /> <xsl:variable name="currentTextLength" as="xsd:integer" select="string-length(.)" /> <xsl:variable name="sum" select="$preTextLength + $currentTextLength" as="xsd:integer"></xsl:variable> <xsl:variable name="split-index" select="$compareIndex - $preTextLength" as="xsd:integer"></xsl:variable> <xsl:if test="($sum ge $compareIndex) and ($compareIndex gt $preTextLength)"> <data split-index="{$split-index}" text-id="{generate-id(.)}"> <xsl:copy-of select="$nodeContent"/> </data> </xsl:if> </xsl:for-each> </xsl:for-each> </xsl:variable>

  • HttpClient multithread performance

    - by pepper
    I have an application which downloads more than 4500 html pages from 62 target hosts using HttpClient (4.1.3 or 4.2-beta). It runs on Windows 7 64-bit. Processor - Core i7 2600K. Network bandwidth - 54 Mb/s. At this moment it uses such parameters: DefaultHttpClient and PoolingClientConnectionManager; Also it hasIdleConnectionMonitorThread from http://hc.apache.org/httpcomponents-client-ga/tutorial/html/connmgmt.html; Maximum total connections = 80; Default maximum connections per route = 5; For thread management it uses ForkJoinPool with the parallelism level = 5 (Do I understand correctly that it is a number of working threads?) In this case my network usage (in Windows task manager) does not rise above 2.5%. To download 4500 pages it takes 70 minutes. And in HttpClient logs I have such things: DEBUG ForkJoinPool-2-worker-1 [org.apache.http.impl.conn.PoolingClientConnectionManager]: Connection released: [id: 209][route: {}-http://stackoverflow.com][total kept alive: 6; route allocated: 1 of 5; total allocated: 10 of 80] Total allocated connections do not raise above 10-12, in spite of that I've set it up to 80 connections. If I'll try to rise parallelism level to 20 or 80, network usage remains the same but a lot connection time-outs will be generated. I've read tutorials on hc.apache.org (HttpClient Performance Optimization Guide and HttpClient Threading Guide) but they does not help. Task's code looks like this: public class ContentDownloader extends RecursiveAction { private final HttpClient httpClient; private final HttpContext context; private List<Entry> entries; public ContentDownloader(HttpClient httpClient, List<Entry> entries){ this.httpClient = httpClient; context = new BasicHttpContext(); this.entries = entries; } private void computeDirectly(Entry entry){ final HttpGet get = new HttpGet(entry.getLink()); try { HttpResponse response = httpClient.execute(get, context); int statusCode = response.getStatusLine().getStatusCode(); if ( (statusCode >= 400) && (statusCode <= 600) ) { logger.error("Couldn't get content from " + get.getURI().toString() + "\n" + response.toString()); } else { HttpEntity entity = response.getEntity(); if (entity != null) { String htmlContent = EntityUtils.toString(entity).trim(); entry.setHtml(htmlContent); EntityUtils.consumeQuietly(entity); } } } catch (Exception e) { } finally { get.releaseConnection(); } } @Override protected void compute() { if (entries.size() <= 1){ computeDirectly(entries.get(0)); return; } int split = entries.size() / 2; invokeAll(new ContentDownloader(httpClient, entries.subList(0, split)), new ContentDownloader(httpClient, entries.subList(split, entries.size()))); } } And the question is - what is the best practice to use multi threaded HttpClient, may be there is a some rules for setting up ConnectionManager and HttpClient? How can I use all of 80 connections and raise network usage? If necessary, I will provide more code.
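
    One detail in the setup above that caps throughput: with a default of 5 connections per route and a fork/join parallelism level of 5, at most a handful of requests are ever in flight, regardless of the 80-connection total. A sketch of lining the numbers up, based on the HttpClient 4.2 pooling manager API (the sizes and the URL list are illustrative):

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        import org.apache.http.HttpResponse;
        import org.apache.http.client.HttpClient;
        import org.apache.http.client.methods.HttpGet;
        import org.apache.http.impl.client.DefaultHttpClient;
        import org.apache.http.impl.conn.PoolingClientConnectionManager;
        import org.apache.http.util.EntityUtils;

        // Sketch: size the connection pool and the thread pool together so the
        // per-route and total limits, not the worker count, are the throttle.
        // Numbers are illustrative; 62 hosts at ~5-10 connections each suggests
        // a much larger total than the defaults.
        public class TunedDownloader {
            public static void main(String[] args) throws Exception {
                PoolingClientConnectionManager cm = new PoolingClientConnectionManager();
                cm.setMaxTotal(200);               // across all 62 hosts
                cm.setDefaultMaxPerRoute(10);      // per host

                final HttpClient client = new DefaultHttpClient(cm);

                // Worker count roughly matching the connections we allow.
                ExecutorService pool = Executors.newFixedThreadPool(50);
                for (final String url : new String[] { "http://stackoverflow.com/" }) { // real 4500+ URLs go here
                    pool.submit(new Runnable() {
                        public void run() {
                            HttpGet get = new HttpGet(url);
                            try {
                                HttpResponse response = client.execute(get);
                                EntityUtils.consumeQuietly(response.getEntity());
                            } catch (Exception e) {
                                get.abort();
                            }
                        }
                    });
                }
                pool.shutdown();
            }
        }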

  • Not getting the output in Java Database Connectivity

    - by Dooree
    I'm working on Java Database Connectivity through Eclipse IDE. I built a database through Ubuntu Terminal, and I need to connect and work with it. However, when I tried to run the following code, I don't get any error, but the following output is showed, anybody knows why I don't get the output from the code ? //STEP 1. Import required packages import java.sql.*; public class FirstExample { // JDBC driver name and database URL static final String JDBC_DRIVER = "com.mysql.jdbc.Driver"; static final String DB_URL = "jdbc:mysql://localhost/EMP"; // Database credentials static final String USER = "username"; static final String PASS = "password"; public static void main(String[] args) { Connection conn = null; Statement stmt = null; try{ //STEP 2: Register JDBC driver Class.forName("com.mysql.jdbc.Driver"); //STEP 3: Open a connection System.out.println("Connecting to database..."); conn = DriverManager.getConnection(DB_URL,USER,PASS); //STEP 4: Execute a query System.out.println("Creating statement..."); stmt = conn.createStatement(); String sql; sql = "SELECT id, first, last, age FROM Employees"; ResultSet rs = stmt.executeQuery(sql); //STEP 5: Extract data from result set while(rs.next()){ //Retrieve by column name int id = rs.getInt("id"); int age = rs.getInt("age"); String first = rs.getString("first"); String last = rs.getString("last"); //Display values System.out.print("ID: " + id); System.out.print(", Age: " + age); System.out.print(", First: " + first); System.out.println(", Last: " + last); } //STEP 6: Clean-up environment rs.close(); stmt.close(); conn.close(); }catch(SQLException se){ //Handle errors for JDBC se.printStackTrace(); }catch(Exception e){ //Handle errors for Class.forName e.printStackTrace(); }finally{ //finally block used to close resources try{ if(stmt!=null) stmt.close(); }catch(SQLException se2){ }// nothing we can do try{ if(conn!=null) conn.close(); }catch(SQLException se){ se.printStackTrace(); }//end finally try }//end try System.out.println("Goodbye!"); }//end main }//end FirstExample <ConnectionProperties> <PropertyCategory name="Connection/Authentication"> <Property name="user" required="No" default="" sortOrder="-2147483647" since="all"> The user to connect as </Property> <Property name="password" required="No" default="" sortOrder="-2147483646" since="all"> The password to use when connecting </Property> <Property name="socketFactory" required="No" default="com.mysql.jdbc.StandardSocketFactory" sortOrder="4" since="3.0.3"> The name of the class that the driver should use for creating socket connections to the server. This class must implement the interface 'com.mysql.jdbc.SocketFactory' and have public no-args constructor. </Property> <Property name="connectTimeout" required="No" default="0" sortOrder="9" since="3.0.1"> Timeout for socket connect (in milliseconds), with 0 being no timeout. Only works on JDK-1.4 or newer. Defaults to '0'. </Property> ...

  • How can I clear a SQLite database each time I start my app

    - by AndroidUser99
    Hi, i want that each time i start my app my SQLite database get's cleaned for this, i need to make a method on my class MyDBAdapter.java code examples are welcome, i have no idea how to do it this is the dbadapter/helper i'm using: public class MyDbAdapter { private static final String TAG = "NotesDbAdapter"; private DatabaseHelper mDbHelper; private SQLiteDatabase mDb; private static final String DATABASE_NAME = "gpslocdb"; private static final String PERMISSION_TABLE_CREATE = "CREATE TABLE permission ( fk_email1 varchar, fk_email2 varchar, validated tinyint, hour1 time default '08:00:00', hour2 time default '20:00:00', date1 date, date2 date, weekend tinyint default '0', fk_type varchar, PRIMARY KEY (fk_email1,fk_email2))"; private static final String USER_TABLE_CREATE = "CREATE TABLE user ( email varchar, password varchar, fullName varchar, mobilePhone varchar, mobileOperatingSystem varchar, PRIMARY KEY (email))"; private static final int DATABASE_VERSION = 2; private final Context mCtx; private static class DatabaseHelper extends SQLiteOpenHelper { DatabaseHelper(Context context) { super(context, DATABASE_NAME, null, DATABASE_VERSION); } @Override public void onCreate(SQLiteDatabase db) { db.execSQL(PERMISSION_TABLE_CREATE); db.execSQL(USER_TABLE_CREATE); } @Override public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { db.execSQL("DROP TABLE IF EXISTS user"); db.execSQL("DROP TABLE IF EXISTS permission"); onCreate(db); } } /** * Constructor - takes the context to allow the database to be * opened/created * * @param ctx the Context within which to work */ public MyDbAdapter(Context ctx) { this.mCtx = ctx; } /** * Open the database. If it cannot be opened, try to create a new * instance of the database. If it cannot be created, throw an exception to * signal the failure * * @return this (self reference, allowing this to be chained in an * initialization call) * @throws SQLException if the database could be neither opened or created */ public MyDbAdapter open() throws SQLException { mDbHelper = new DatabaseHelper(mCtx); mDb = mDbHelper.getWritableDatabase(); return this; } public void close() { mDbHelper.close(); } public long createUser(String email, String password, String fullName, String mobilePhone, String mobileOperatingSystem) { ContentValues initialValues = new ContentValues(); initialValues.put("email",email); initialValues.put("password",password); initialValues.put("fullName",fullName); initialValues.put("mobilePhone",mobilePhone); initialValues.put("mobileOperatingSystem",mobileOperatingSystem); return mDb.insert("user", null, initialValues); } public Cursor fetchAllUsers() { return mDb.query("user", new String[] {"email", "password", "fullName", "mobilePhone", "mobileOperatingSystem"}, null, null, null, null, null); } public Cursor fetchUser(String email) throws SQLException { Cursor mCursor = mDb.query(true, "user", new String[] {"email", "password", "fullName", "mobilePhone", "mobileOperatingSystem"} , "email" + "=" + email, null, null, null, null, null); if (mCursor != null) { mCursor.moveToFirst(); } return mCursor; } public List<Friend> retrieveAllUsers() { List <Friend> friends=new ArrayList<Friend>(); Cursor result=fetchAllUsers(); if( result.moveToFirst() ){ do{ //note.getString(note.getColumnIndexOrThrow(NotesDbAdapter.KEY_TITLE))); friends.add(new Friend(result.getString(result.getColumnIndexOrThrow("email")), "","","","")); }while( result.moveToNext() ); } return friends; } }
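
    One possible method to add to MyDbAdapter for this, assuming the adapter above: delete all rows from both tables inside a transaction, and call it once right after open() when the application starts (for example, new MyDbAdapter(this).open() followed by clearDatabase() in the launcher Activity's onCreate()). The heavier alternative is mCtx.deleteDatabase("gpslocdb"), which removes the file and lets onCreate() rebuild the schema on the next open.

        // One possible addition to MyDbAdapter: wipe both tables, to be called
        // once after open() when the application starts.
        public void clearDatabase() {
            mDb.beginTransaction();
            try {
                mDb.delete("permission", null, null);  // null where-clause deletes all rows
                mDb.delete("user", null, null);
                mDb.setTransactionSuccessful();
            } finally {
                mDb.endTransaction();
            }
        }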

  • Persisting objects to a database using a loop

    - by ChaoticLoki
    I have a form that returns multiple values and each value I would like to store in a database, I created a test form for this purpose <form method="post" action="{{ path('submit_exercise', {'page': 1}) }}"> <input type="hidden" name="answers[]" value="Answer 1" "/> <input type="hidden" name="answers[]" value="Answer 2" "/> <input type="hidden" name="answers[]" value="Answer 3" "/> <input type="hidden" name="answers[]" value="Answer 4" "/> <input type="hidden" name="answers[]" value="Answer 5" "/> <input type="hidden" name="answers[]" value="Answer 6" "/> <input type="hidden" name="answers[]" value="Answer 7" "/> <input type="submit" name="submit" /> </form> </body> </html> My submit answers Action is currently written like so. public function submitAnswersAction($page) { //get submitted data $data = $this->get('request')->request->all(); $answers = $data['answers']; //get student ID $user = $this->get('security.context')->getToken()->getUser(); $studentID = $user->getId(); //Get Current time $currentTime = new \DateTime(date("Y-m-d H:m:s")); //var_dump($answers); //var_dump($studentID); //var_dump($currentTime); for($index = 0; $index < count($answers); $index++) { /*echo "Question ". ($index + 1) ."<br />"; echo "Student ID: ". $studentID."<br />"; echo "Page Number: $page <br />"; echo "Answer: $answers[$index]"."<br />"; echo "<br />";*/ $studentAnswer = new StudentAnswer(); $studentAnswer->setStudentID($studentID); $studentAnswer->setPageID($page); $studentAnswer->setQuestionID($index+1); $studentAnswer->setAnswer($answers[$index]); $studentAnswer->setDateCreated($currentTime); $studentAnswer->setReadStatus(0); $database = $this->getDoctrine()->getManager(); $database->persist($studentAnswer); $database->flush(); } return new Response('Answers saved for Student: '.$user->getFullName().' for page: '.$page); When I do a var_dump everything seems to be associated correctly, meaning that the answers array is populated with the right data and so is every other variable, my problem is actually persisting it to the database. when run it returns with this error which seems to me like it doesn't know what variables to put into the row. An exception occurred while executing 'INSERT INTO Student_Answer (student_id, page_id, question_id, answer, read, date_created) VALUES (?, ?, ?, ?, ?, ?)' with params {"1":2,"2":"1","3":1,"4":"Answer 1","5":0,"6":"2012-12-11 12:12:20"}: Any help would be greatly appreciated as this is a personal project to help me try and understand web development a bit more.

  • TOTD #166: Using NoSQL database in your Java EE 6 Applications on GlassFish - MongoDB for now!

    - by arungupta
    The Java EE 6 platform includes Java Persistence API to work with RDBMS. The JPA specification defines a comprehensive API that includes, but not restricted to, how a database table can be mapped to a POJO and vice versa, provides mechanisms how a PersistenceContext can be injected in a @Stateless bean and then be used for performing different operations on the database table and write typesafe queries. There are several well known advantages of RDBMS but the NoSQL movement has gained traction over past couple of years. The NoSQL databases are not intended to be a replacement for the mainstream RDBMS. As Philosophy of NoSQL explains, NoSQL database was designed for casual use where all the features typically provided by an RDBMS are not required. The name "NoSQL" is more of a category of databases that is more known for what it is not rather than what it is. The basic principles of NoSQL database are: No need to have a pre-defined schema and that makes them a schema-less database. Addition of new properties to existing objects is easy and does not require ALTER TABLE. The unstructured data gives flexibility to change the format of data any time without downtime or reduced service levels. Also there are no joins happening on the server because there is no structure and thus no relation between them. Scalability and performance is more important than the entire set of functionality typically provided by an RDBMS. This set of databases provide eventual consistency and/or transactions restricted to single items but more focus on CRUD. Not be restricted to SQL to access the information stored in the backing database. Designed to scale-out (horizontal) instead of scale-up (vertical). This is important knowing that databases, and everything else as well, is moving into the cloud. RBDMS can scale-out using sharding but requires complex management and not for the faint of heart. Unlike RBDMS which require a separate caching tier, most of the NoSQL databases comes with integrated caching. Designed for less management and simpler data models lead to lower administration as well. There are primarily three types of NoSQL databases: Key-Value stores (e.g. Cassandra and Riak) Document databases (MongoDB or CouchDB) Graph databases (Neo4J) You may think NoSQL is panacea but as I mentioned above they are not meant to replace the mainstream databases and here is why: RDBMS have been around for many years, very stable, and functionally rich. This is something CIOs and CTOs can bet their money on without much worry. There is a reason 98% of Fortune 100 companies run Oracle :-) NoSQL is cutting edge, brings excitement to developers, but enterprises are cautious about them. Commercial databases like Oracle are well supported by the backing enterprises in terms of providing support resources on a global scale. There is a full ecosystem built around these commercial databases providing training, performance tuning, architecture guidance, and everything else. NoSQL is fairly new and typically backed by a single company not able to meet the scale of these big enterprises. NoSQL databases are good for CRUDing operations but business intelligence is extremely important for enterprises to stay competitive. RDBMS provide extensive tooling to generate this data but that was not the original intention of NoSQL databases and is lacking in that area. Generating any meaningful information other than CRUDing require extensive programming. 
Not suited for complex transactions such as banking systems or other highly transactional applications requiring 2-phase commit. SQL cannot be used with NoSQL databases and writing simple queries can be involving. Enough talking, lets take a look at some code. This blog has published multiple blogs on how to access a RDBMS using JPA in a Java EE 6 application. This Tip Of The Day (TOTD) will show you can use MongoDB (a document-oriented database) with a typical 3-tier Java EE 6 application. Lets get started! The complete source code of this project can be downloaded here. Download MongoDB for your platform from here (1.8.2 as of this writing) and start the server as: arun@ArunUbuntu:~/tools/mongodb-linux-x86_64-1.8.2/bin$./mongod./mongod --help for help and startup optionsSun Jun 26 20:41:11 [initandlisten] MongoDB starting : pid=11210port=27017 dbpath=/data/db/ 64-bit Sun Jun 26 20:41:11 [initandlisten] db version v1.8.2, pdfile version4.5Sun Jun 26 20:41:11 [initandlisten] git version:433bbaa14aaba6860da15bd4de8edf600f56501bSun Jun 26 20:41:11 [initandlisten] build sys info: Linuxbs-linux64.10gen.cc 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 2017:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_41Sun Jun 26 20:41:11 [initandlisten] waiting for connections on port 27017Sun Jun 26 20:41:11 [websvr] web admin interface listening on port 28017 The default directory for the database is /data/db and needs to be created as: sudo mkdir -p /data/db/sudo chown `id -u` /data/db You can specify a different directory using "--dbpath" option. Refer to Quickstart for your specific platform. Using NetBeans, create a Java EE 6 project and make sure to enable CDI and add JavaServer Faces framework. Download MongoDB Java Driver (2.6.3 of this writing) and add it to the project library by selecting "Properties", "LIbraries", "Add Library...", creating a new library by specifying the location of the JAR file, and adding the library to the created project. Edit the generated "index.xhtml" such that it looks like: <h1>Add a new movie</h1><h:form> Name: <h:inputText value="#{movie.name}" size="20"/><br/> Year: <h:inputText value="#{movie.year}" size="6"/><br/> Language: <h:inputText value="#{movie.language}" size="20"/><br/> <h:commandButton actionListener="#{movieSessionBean.createMovie}" action="show" title="Add" value="submit"/></h:form> This page has a simple HTML form with three text boxes and a submit button. The text boxes take name, year, and language of a movie and the submit button invokes the "createMovie" method of "movieSessionBean" and then render "show.xhtml". Create "show.xhtml" ("New" -> "Other..." -> "Other" -> "XHTML File") such that it looks like: <head> <title><h1>List of movies</h1></title> </head> <body> <h:form> <h:dataTable value="#{movieSessionBean.movies}" var="m" > <h:column><f:facet name="header">Name</f:facet>#{m.name}</h:column> <h:column><f:facet name="header">Year</f:facet>#{m.year}</h:column> <h:column><f:facet name="header">Language</f:facet>#{m.language}</h:column> </h:dataTable> </h:form> This page shows the name, year, and language of all movies stored in the database so far. The list of movies is returned by "movieSessionBean.movies" property. 
Now create the "Movie" class such that it looks like: import com.mongodb.BasicDBObject;import com.mongodb.BasicDBObject;import com.mongodb.DBObject;import javax.enterprise.inject.Model;import javax.validation.constraints.Size;/** * @author arun */@Modelpublic class Movie { @Size(min=1, max=20) private String name; @Size(min=1, max=20) private String language; private int year; // getters and setters for "name", "year", "language" public BasicDBObject toDBObject() { BasicDBObject doc = new BasicDBObject(); doc.put("name", name); doc.put("year", year); doc.put("language", language); return doc; } public static Movie fromDBObject(DBObject doc) { Movie m = new Movie(); m.name = (String)doc.get("name"); m.year = (int)doc.get("year"); m.language = (String)doc.get("language"); return m; } @Override public String toString() { return name + ", " + year + ", " + language; }} Other than the usual boilerplate code, the key methods here are "toDBObject" and "fromDBObject". These methods provide a conversion from "Movie" -> "DBObject" and vice versa. The "DBObject" is a MongoDB class that comes as part of the mongo-2.6.3.jar file and which we added to our project earlier.  The complete javadoc for 2.6.3 can be seen here. Notice, this class also uses Bean Validation constraints and will be honored by the JSF layer. Finally, create "MovieSessionBean" stateless EJB with all the business logic such that it looks like: package org.glassfish.samples;import com.mongodb.BasicDBObject;import com.mongodb.DB;import com.mongodb.DBCollection;import com.mongodb.DBCursor;import com.mongodb.DBObject;import com.mongodb.Mongo;import java.net.UnknownHostException;import java.util.ArrayList;import java.util.List;import javax.annotation.PostConstruct;import javax.ejb.Stateless;import javax.inject.Inject;import javax.inject.Named;/** * @author arun */@Stateless@Namedpublic class MovieSessionBean { @Inject Movie movie; DBCollection movieColl; @PostConstruct private void initDB() throws UnknownHostException { Mongo m = new Mongo(); DB db = m.getDB("movieDB"); movieColl = db.getCollection("movies"); if (movieColl == null) { movieColl = db.createCollection("movies", null); } } public void createMovie() { BasicDBObject doc = movie.toDBObject(); movieColl.insert(doc); } public List<Movie> getMovies() { List<Movie> movies = new ArrayList(); DBCursor cur = movieColl.find(); System.out.println("getMovies: Found " + cur.size() + " movie(s)"); for (DBObject dbo : cur.toArray()) { movies.add(Movie.fromDBObject(dbo)); } return movies; }} The database is initialized in @PostConstruct. Instead of a working with a database table, NoSQL databases work with a schema-less document. The "Movie" class is the document in our case and stored in the collection "movies". The collection allows us to perform query functions on all movies. The "getMovies" method invokes "find" method on the collection which is equivalent to the SQL query "select * from movies" and then returns a List<Movie>. Also notice that there is no "persistence.xml" in the project. Right-click and run the project to see the output as: Enter some values in the text box and click on enter to see the result as: If you reached here then you've successfully used MongoDB in your Java EE 6 application, congratulations! Some food for thought and further play ... SQL to MongoDB mapping shows mapping between traditional SQL -> Mongo query language. Tutorial shows fun things you can do with MongoDB. 
Try the interactive online shell  The cookbook provides common ways of using MongoDB In terms of this project, here are some tasks that can be tried: Encapsulate database management in a JPA persistence provider. Is it even worth it because the capabilities are going to be very different ? MongoDB uses "BSonObject" class for JSON representation, add @XmlRootElement on a POJO and how a compatible JSON representation can be generated. This will make the fromXXX and toXXX methods redundant.

  • How do I sell Oracle? It all starts with the database

    - by swiesner
    Partner sales reps often ask me how they start a conversation with their customers around Oracle. The first question should be, "are you running an Oracle database?" Much of what we do at Oracle is intended to optimize our customers' investments in the Oracle database. Many of our acquisitions, new features in existing products, new applications, are often designed to improve the experience of the 400,000 Oracle database customers worldwide. Once you find an Oracle database customer, the next set of questions are much easier, but depend on your expertise: License Management, Upsell, and Services. Let's start with License Management: Have there been any changes to your infrastructure since you purchased the database licenses? Have you upgraded the servers on which the Oracle database is running? Have the number of users or employees increased since the last license purchase? Yes to any of these questions will lead to investigating correct licensing. The goal is to provide a "soft" license review. Oracle generally does not require any license keys to install our software, so we need to help our customers with compliance. Correct licensing is essential to managing costs, and can provide a great way to efficiently manage IT spend. You might want to contact your local Oracle Channel Manager or VAD for licensing assistance. Or, review the Software Investment Guide. Naturally, these questions can lead to upsell opportunities. If your customer has invested in Oracle technology to manage their data, that data is essential to running their business. We'll take a look at those upsell questions in a later blog post.

  • Improving Performance of Crystal Reports using Stored Procedures

    - by mjh41
    Recently I updated a Crystal Report that was doing all of its work on the client-side (Selects, formulas, etc) and changed all of the logic to be done on the server-side through Stored Procedures using an Oracle 11g database. Now the report is only being used to display the output of the stored procedures and nothing else. Everything I have read on this subject says that utilizing stored procedures should greatly reduce the running time of the report, but it still takes roughly the same amount of time to retrieve the data from the server. Is there something wrong with the stored procedure I have written, or is the issue in the Crystal Report itself? Here is the stored procedure code along with the package that defines the necessary REF CURSOR. CREATE OR REPLACE PROCEDURE SP90_INVENTORYDATA_ALL ( invdata_cur IN OUT sftnecm.inv_data_all_pkg.inv_data_all_type, dCurrentEndDate IN vw_METADATA.CASEENTRCVDDATE%type, dCurrentStartDate IN vw_METADATA.CASEENTRCVDDATE%type ) AS BEGIN OPEN invdata_cur FOR SELECT vw_METADATA.CREATIONTIME, vw_METADATA.RESRESOLUTIONDATE, vw_METADATA.CASEENTRCVDDATE, vw_METADATA.CASESTATUS, vw_METADATA.CASENUMBER, (CASE WHEN vw_METADATA.CASEENTRCVDDATE < dCurrentStartDate AND ( (vw_METADATA.CASESTATUS is null OR vw_METADATA.CASESTATUS != 'Closed') OR TO_DATE(vw_METADATA.RESRESOLUTIONDATE, 'MM/DD/YYYY') >= dCurrentStartDate) then 1 else 0 end) InventoryBegin, (CASE WHEN (to_date(vw_METADATA.RESRESOLUTIONDATE, 'MM/DD/YYYY') BETWEEN dCurrentStartDate AND dCurrentEndDate) AND vw_METADATA.RESRESOLUTIONDATE is not null AND vw_METADATA.CASESTATUS is not null then 1 else 0 end) CaseClosed, (CASE WHEN vw_METADATA.CASEENTRCVDDATE BETWEEN dCurrentStartDate AND dCurrentEndDate then 1 else 0 end) CaseCreated FROM vw_METADATA WHERE vw_METADATA.CASEENTRCVDDATE <= dCurrentEndDate ORDER BY vw_METADATA.CREATIONTIME, vw_METADATA.CASESTATUS; END SP90_INVENTORYDATA_ALL; And the package: CREATE OR REPLACE PACKAGE inv_data_all_pkg AS TYPE inv_data_all_type IS REF CURSOR RETURN inv_data_all_temp%ROWTYPE; END inv_data_all_pkg;
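
    A quick way to tell whether the remaining time is spent in the procedure or in the Crystal layer is to call the procedure directly and time it. Below is a sketch using the Oracle thin JDBC driver; credentials, host and dates are placeholders. The cursor is treated as a plain OUT parameter here, which is how result cursors are normally declared; if the IN OUT declaration above makes the driver demand an input value, switching the parameter to OUT in the procedure is the usual adjustment.

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import oracle.jdbc.OracleTypes;

        // Sketch: time the stored procedure on its own, outside Crystal Reports,
        // to see whether the delay is in the procedure or in the report layer.
        public class TimeInventoryProc {
            public static void main(String[] args) throws Exception {
                Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/orcl", "sftnecm", "password");

                CallableStatement cs = conn.prepareCall(
                        "{call SP90_INVENTORYDATA_ALL(?, ?, ?)}");
                cs.registerOutParameter(1, OracleTypes.CURSOR);     // invdata_cur
                cs.setDate(2, java.sql.Date.valueOf("2010-06-30")); // dCurrentEndDate
                cs.setDate(3, java.sql.Date.valueOf("2010-06-01")); // dCurrentStartDate

                long start = System.currentTimeMillis();
                cs.execute();
                ResultSet rs = (ResultSet) cs.getObject(1);
                int rows = 0;
                while (rs.next()) {
                    rows++;
                }
                System.out.println(rows + " rows in "
                        + (System.currentTimeMillis() - start) + " ms");

                rs.close();
                cs.close();
                conn.close();
            }
        }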

  • Setting up a "cookieless domain" to improve site performance

    - by Django Reinhardt
    I was reading in Google's documentation about improving site speed. One of their recommendations is serving static content (images, css, js, etc.) from a "cookieless domain": Static content, such as images, JS and CSS files, don't need to be accompanied by cookies, as there is no user interaction with these resources. You can decrease request latency by serving static resources from a domain that doesn't serve cookies. Google then says that the best way to do this is to buy a new domain and set it to point to your current one: To reserve a cookieless domain for serving static content, register a new domain name and configure your DNS database with a CNAME record that points the new domain to your existing domain A record. Configure your web server to serve static resources from the new domain, and do not allow any cookies to be set anywhere on this domain. In your web pages, reference the domain name in the URLs for the static resources. This is pretty straight forward stuff, except for the bit where it says to "configure your web server to serve static resources from the new domain, and do not allow any cookies to be set anywhere on this domain". From what I've read, there's no setting in IIS that allows you to say "serve static resources", so how do I prevent ASP.NET from setting cookies on this new domain? At present, even if I'm just requesting a .jpg from the new domain, it sets a cookie on my browser, even though our application's cookies are set to our old domain. For example, ASP.NET sets an ".ASPXANONYMOUS" cookie that (as far as I'm aware) we're not telling it to do. Apologies if this is a real newb question, I'm new at this! Thanks.

  • Question about SQL Server HierarchyID depth-first performance

    - by AndalusianCat
    I am trying to implement hierarchyID in a table (dbo.[Message]) containing roughly 50,000 rows (will grow substantially in the future). However it takes 30-40 seconds to retrieve about 25 results. The root node is a filler in order to provide uniqueness, therefor every subsequent row is a child of that dummy row. I need to be able to traverse the table depth-first and have made the hierarchyID column (dbo.[Message].MessageID) the clustering primary key, have also added a computed smallint (dbo.[Message].Hierarchy) which stores the level of the node. Usage: A .Net application passes through a hierarchyID value into the database and I want to be able to retrieve all (if any) children AND parents of that node (besides the root, as it is filler). A simplified version of the query I am using: @MessageID hierarchyID /* passed in from application */ SELECT m.MessageID, m.MessageComment FROM dbo.[Message] as m WHERE m.Messageid.IsDescendantOf(@MessageID.GetAncestor((@MessageID.GetLevel()-1))) = 1 ORDER BY m.MessageID From what I understand, the index should be detected automatically without a hint. From searching forums I have seen people utilizing index hints, at least in the case of breadth-first indexes, as apparently CLR calls may be opaque to the query optimizer. I have spent the past few days trying to find a solution for this issue, but to no avail. I would greatly appreciate any assistance, and as this is my first post, I apologize in advance if this would be considered a 'noobish' question, I have read the MS documentation and searched countless forums, but have not came across a succinct description of the specific issue.

  • How to choose a light version of a database system

    - by adopilot
    I am starting a POS (point of sale) project. The target system is going to be written in C# .NET 2 WinForms, and as the main database server we are going to use MS SQL Server. As we have a lot of POS devices in a chain for one store, I would love to have a local backend database on each POS device. The scenario is the following: when the main server goes down, the POS application should continue working "off-line" with the local database until the connection to the main server comes up again. Now I am in a dilemma over which local database is going to be the best fit for me. Here are some notes to help point me in the right direction: it needs to be light ("my POS devices are usually old and suffering performance-wise") and it needs to be free ("I have a lot of devices and I do not want additional cost beside the main SQL Server"). One day I would also love to try porting all of this to Mono and Linux. Here is what I've researched so far: plain XML ("light, but I am afraid of performance; my main items table averages 10K records"); SQL Express ("I am afraid my POS devices are too poor in hardware for SQL Express, and it is also hard to install and configure on each device"); the lesser-known Advantage Database Server, which has a free distribution of its offline ADT system; DBF with an extended library ("respect for good old DBFs, but that era is behind me, along with Clipper"); MS Access; and SQLite ("the one I like most for now, but I am afraid of how it will pair with MS SQL: do they have the same data types?"). I know there is a lot of subjective data on SO about this, but can someone at least recommend some other light database systems, or the things I should pay the most attention to before I choose a database?
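
    Of the options listed, SQLite is the usual fit for an offline cache like this: the database is a single file next to the application, with nothing to install or configure per device. A minimal sketch follows, shown in Java with the sqlite-jdbc driver purely to illustrate how little setup is involved (from C# the equivalent would be the System.Data.SQLite provider); file, table and column names are examples, and SQLite's loose typing means the mapping to MS SQL data types has to be handled explicitly during synchronization.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.Statement;

        // Sketch: an embedded SQLite store for the offline item cache. The
        // database is a single file next to the application; no service to
        // install. Assumes the sqlite-jdbc driver; names are examples.
        public class OfflineItemStore {
            public static void main(String[] args) throws Exception {
                Connection conn = DriverManager.getConnection("jdbc:sqlite:pos-offline.db");

                Statement st = conn.createStatement();
                st.executeUpdate("CREATE TABLE IF NOT EXISTS item ("
                        + " id INTEGER PRIMARY KEY,"
                        + " code TEXT NOT NULL,"
                        + " name TEXT NOT NULL,"
                        + " price REAL NOT NULL)");  // SQLite types are loose; map them explicitly when syncing to MS SQL

                PreparedStatement ins = conn.prepareStatement(
                        "INSERT INTO item(code, name, price) VALUES (?, ?, ?)");
                ins.setString(1, "A-001");
                ins.setString(2, "Example item");
                ins.setDouble(3, 9.99);
                ins.executeUpdate();

                ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM item");
                rs.next();
                System.out.println("items cached: " + rs.getInt(1));

                conn.close();
            }
        }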

  • Hibernate: Check if object exists/changed

    - by swalkner
    Assume I have an object Person with: long id, String firstName, String lastName, String address. I'm generating a Person object somewhere in my application. Now I'd like to check whether the person exists in the database (i.e. the firstName/lastName combination is in the database). If not, insert it. If yes, check whether the address is the same; if not, update the address. Of course, I can do a few requests (first, try to load the object by firstName/lastName, then, if it exists, compare the address). But isn't there a simpler, cleaner approach? I've got several different classes and do not like having so many queries. I'd like to use annotations, as if to say: firstName/lastName form the primary key, check them to see whether the object exists; address is the property you have to compare to see whether it stayed the same or not. Does Hibernate/JPA (or another framework) support something like that? Pseudo-code: if (database.containsObject(person)) { //containment according to compound keys if (database.containsChangedObject(person)) { database.updateObject(person); } } else { database.insertObject(person); }
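
    Hibernate/JPA has no single annotation that performs the whole exists-then-compare-then-update step, but the lookup by the natural key can be wrapped once in a small helper so each class needs only one query. A minimal sketch using plain JPA, assuming Person is mapped as an @Entity with the usual getters and setters and that the call runs inside an active transaction (query and property names are illustrative):

        import java.util.List;
        import javax.persistence.EntityManager;
        import javax.persistence.TypedQuery;

        // Sketch: insert the Person if the firstName/lastName pair is unknown,
        // otherwise update the address only when it changed. Assumes Person is a
        // mapped @Entity and that this runs inside an active transaction.
        public class PersonUpserter {

            public void saveOrUpdate(EntityManager em, Person incoming) {
                TypedQuery<Person> q = em.createQuery(
                        "select p from Person p where p.firstName = :first and p.lastName = :last",
                        Person.class);
                q.setParameter("first", incoming.getFirstName());
                q.setParameter("last", incoming.getLastName());

                List<Person> existing = q.getResultList();
                if (existing.isEmpty()) {
                    em.persist(incoming);                        // not in the database yet
                } else {
                    Person p = existing.get(0);
                    String current = p.getAddress();
                    if (current == null || !current.equals(incoming.getAddress())) {
                        p.setAddress(incoming.getAddress());     // managed entity: flushed as an UPDATE
                    }
                }
            }
        }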

  • General SQL Server query performance

    - by Kiril
    Hey guys, This might be stupid, but databases are not my thing :) Imagine the following scenario. A user can create a post and other users can reply to his post, thus forming a thread. Everything goes in a single table called Posts. All the posts that form a thread are connected with each other through a generated key called ThreadID. This means that when user #1 creates a new post, a ThreadID is generated, and every reply that follows has a ThreadID pointing to the initial post (created by user #1). What I am trying to do is limit the number of replies to let's say 20 per thread. I'm wondering which of the approaches bellow is faster: 1 I add a new integer column (e.x. Counter) to Posts. After a user replies to the initial post, I update the initial post's Counter field. If it reaches 20 I lock the thread. 2 After a user replies to the initial post, I select all the posts that have the same ThreadID. If this collection has more than 20 items, I lock the thread. For further information: I am using SQL Server database and Linq-to-SQL entity model. I'd be glad if you tell me your opinions on the two approaches or share another, faster approach. Best Regards, Kiril
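
    For approach 1, the counter can be incremented and checked in a single statement, which avoids reading the row first and also avoids racing with other repliers. A sketch in plain JDBC follows (the same UPDATE can be issued from LINQ to SQL via ExecuteCommand); the table and column names come from the description above, and the assumption about how the initial post is keyed is marked in the code.

        import java.sql.Connection;
        import java.sql.PreparedStatement;

        // Sketch of approach 1: bump the reply counter on the thread's initial
        // post and lock the thread atomically, without first reading the row.
        public class ReplyCounter {

            /** @return true if the reply was accepted, false if the 20-reply limit was reached. */
            public static boolean tryAddReply(Connection conn, long initialPostId) throws Exception {
                // Assumes the initial post is addressable by its own key column,
                // named PostID here for illustration.
                PreparedStatement ps = conn.prepareStatement(
                        "UPDATE Posts SET Counter = Counter + 1 "
                        + "WHERE PostID = ? AND Counter < 20");
                ps.setLong(1, initialPostId);
                int updated = ps.executeUpdate();
                ps.close();
                return updated == 1;   // 0 rows means the thread is already full
            }
        }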

  • ListView slow performance

    - by Mohamed Hemdan
    I've created a list of recipes using Listview/customcursoradapter. A custom layout includes a photo for the recipe , Now I've some problems with the performance of viewing and scrolling the Listview although it has only 10 records (Target is 150). sometimes i get this error java.lang.OutOfMemoryError: bitmap size exceeds VM budget , I've tried to implement the Async task but i failed to do it. Is there any way i can overcome this problem? Your help is highly appreciated !! Here is my GetView method public View getView(int position, View convertView, ViewGroup parent) { View row = super.getView(position, convertView, parent); Cursor cursbbn = getCursor(); if (row == null) { LayoutInflater inflater = (LayoutInflater) localContext.getSystemService(Context.LAYOUT_INFLATER_SERVICE); row = inflater.inflate(R.layout.listtype, null); } String Title = cursbbn.getString(2); String SandID=cursbbn.getString(1); String Readyin = cursbbn.getString(4); String Faovoites=cursbbn.getString(8); TextView titler=(TextView)row.findViewById(R.id.listmaintitle); TextView readyinr=(TextView)row.findViewById(R.id.listreadyin); int colorPos = position % colors.length; row.setBackgroundColor(colors[colorPos]); titler.setText(Title); readyinr.setText(Readyin); ImageView picture = (ImageView) row.findViewById(R.id.imageView1); Bitmap bitImg1 = BitmapFactory.decodeResource(localContext.getResources(), R.drawable.rec0001); Bitmap bitImg2 = BitmapFactory.decodeResource(localContext.getResources(), R.drawable.rec0002); Bitmap bitImg3 = BitmapFactory.decodeResource(localContext.getResources(), R.drawable.rec0003); Bitmap bitImg4 = BitmapFactory.decodeResource(localContext.getResources(), R.drawable.rec0004); Bitmap bitImg5 = BitmapFactory.decodeResource(localContext.getResources(), R.drawable.rec0005); Bitmap bitImg6 = BitmapFactory.decodeResource(localContext.getResources(), R.drawable.rec0006); Bitmap bitImg7 = BitmapFactory.decodeResource(localContext.getResources(), R.drawable.rec0007); Bitmap bitImg8 = BitmapFactory.decodeResource(localContext.getResources(), R.drawable.rec0008); Bitmap bitImg9 = BitmapFactory.decodeResource(localContext.getResources(), R.drawable.rec0009); Bitmap bitImg10 = BitmapFactory.decodeResource(localContext.getResources(), R.drawable.rec0010); if(SandID.contentEquals("0001")) picture.setImageBitmap(getRoundedCornerImage(bitImg1)); if(SandID.contentEquals("0002")) picture.setImageBitmap(getRoundedCornerImage(bitImg2)); if(SandID.contentEquals("0003")) picture.setImageBitmap(getRoundedCornerImage(bitImg3)); if(SandID.contentEquals("0004")) picture.setImageBitmap(getRoundedCornerImage(bitImg4)); if(SandID.contentEquals("0005")) picture.setImageBitmap(getRoundedCornerImage(bitImg5)); if(SandID.contentEquals("0006")) picture.setImageBitmap(getRoundedCornerImage(bitImg6)); if(SandID.contentEquals("0007")) picture.setImageBitmap(getRoundedCornerImage(bitImg7)); if(SandID.contentEquals("0008")) picture.setImageBitmap(getRoundedCornerImage(bitImg8)); if(SandID.contentEquals("0009")) picture.setImageBitmap(getRoundedCornerImage(bitImg9)); if(SandID.contentEquals("0010")) picture.setImageBitmap(getRoundedCornerImage(bitImg10)); return row; } And This is the error : 05-02 03:11:55.898: E/AndroidRuntime(376): FATAL EXCEPTION: main 05-02 03:11:55.898: E/AndroidRuntime(376): java.lang.OutOfMemoryError: bitmap size exceeds VM budget 05-02 03:11:55.898: E/AndroidRuntime(376): at android.graphics.BitmapFactory.nativeDecodeAsset(Native Method) 05-02 03:11:55.898: E/AndroidRuntime(376): at 
android.graphics.BitmapFactory.decodeStream(BitmapFactory.java:460) 05-02 03:11:55.898: E/AndroidRuntime(376): at android.graphics.BitmapFactory.decodeResourceStream(BitmapFactory.java:336) 05-02 03:11:55.898: E/AndroidRuntime(376): at android.graphics.BitmapFactory.decodeResource(BitmapFactory.java:359) 05-02 03:11:55.898: E/AndroidRuntime(376): at android.graphics.BitmapFactory.decodeResource(BitmapFactory.java:385) 05-02 03:11:55.898: E/AndroidRuntime(376): at master.chef.mediamaster.AlternateRowCursorAdapter.getView(AlternateRowCursorAdapter.java:83) 05-02 03:11:55.898: E/AndroidRuntime(376): at android.widget.AbsListView.obtainView(AbsListView.java:1409) 05-02 03:11:55.898: E/AndroidRuntime(376): at android.widget.ListView.makeAndAddView(ListView.java:1745) 05-02 03:11:55.898: E/AndroidRuntime(376): at android.widget.ListView.fillUp(ListView.java:700) 05-02 03:11:55.898: E/AndroidRuntime(376): at android.widget.ListView.fillGap(ListView.java:646) 05-02 03:11:55.898: E/AndroidRuntime(376): at android.widget.AbsListView.trackMotionScroll(AbsListView.java:3399) 05-02 03:11:55.898: E/AndroidRuntime(376): at android.widget.AbsListView.onTouchEvent(AbsListView.java:2233) 05-02 03:11:55.898: E/AndroidRuntime(376): at android.widget.ListView.onTouchEvent(ListView.java:3446) 05-02 03:11:55.898: E/AndroidRuntime(376): at android.view.View.dispatchTouchEvent(View.java:3885) 05-02 03:11:55.898: E/AndroidRuntime(376): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:903) 05-02 03:11:55.898: E/AndroidRuntime(376): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:942) 05-02 03:11:55.898: E/AndroidRuntime(376): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:942) 05-02 03:11:55.898: E/AndroidRuntime(376): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:942) 05-02 03:11:55.898: E/AndroidRuntime(376): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:942) 05-02 03:11:55.898: E/AndroidRuntime(376): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:942) 05-02 03:11:55.898: E/AndroidRuntime(376): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:942) 05-02 03:11:55.898: E/AndroidRuntime(376): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:942) 05-02 03:11:55.898: E/AndroidRuntime(376): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:942) 05-02 03:11:55.898: E/AndroidRuntime(376): at com.android.internal.policy.impl.PhoneWindow$DecorView.superDispatchTouchEvent(PhoneWindow.java:1691) 05-02 03:11:55.898: E/AndroidRuntime(376): at com.android.internal.policy.impl.PhoneWindow.superDispatchTouchEvent(PhoneWindow.java:1125) 05-02 03:11:55.898: E/AndroidRuntime(376): at android.app.Activity.dispatchTouchEvent(Activity.java:2096) 05-02 03:11:55.898: E/AndroidRuntime(376): at com.android.internal.policy.impl.PhoneWindow$DecorView.dispatchTouchEvent(PhoneWindow.java:1675) 05-02 03:11:55.898: E/AndroidRuntime(376): at android.view.ViewRoot.deliverPointerEvent(ViewRoot.java:2194) 05-02 03:11:55.898: E/AndroidRuntime(376): at android.view.ViewRoot.handleMessage(ViewRoot.java:1878) 05-02 03:11:55.898: E/AndroidRuntime(376): at android.os.Handler.dispatchMessage(Handler.java:99) 05-02 03:11:55.898: E/AndroidRuntime(376): at android.os.Looper.loop(Looper.java:123) 05-02 03:11:55.898: E/AndroidRuntime(376): at android.app.ActivityThread.main(ActivityThread.java:3683) 05-02 03:11:55.898: E/AndroidRuntime(376): at java.lang.reflect.Method.invokeNative(Native Method) 05-02 03:11:55.898: E/AndroidRuntime(376): at 
java.lang.reflect.Method.invoke(Method.java:507) 05-02 03:11:55.898: E/AndroidRuntime(376): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:839) 05-02 03:11:55.898: E/AndroidRuntime(376): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:597) 05-02 03:11:55.898: E/AndroidRuntime(376): at dalvik.system.NativeStart.main(Native Method)
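
    The getView() above decodes all ten full-size drawables on every call, which is what drives both the slow scrolling and the OutOfMemoryError. Below is a sketch of a replacement for the image handling inside the adapter, assuming the rec0001..rec0010 drawable names used above: decode only the one bitmap the row needs, downsample it, and cache the rounded result (requires java.util.HashMap, android.graphics.Bitmap and android.graphics.BitmapFactory imports).

        // Sketch: decode one image per row, downsampled, and cache the rounded
        // result keyed by the recipe id so scrolling re-uses bitmaps instead of
        // decoding ten resources on every getView() call.
        private final HashMap<String, Bitmap> imageCache = new HashMap<String, Bitmap>();

        private Bitmap getRecipeImage(String sandID) {
            Bitmap cached = imageCache.get(sandID);
            if (cached != null) {
                return cached;
            }
            // Resource names follow the rec0001..rec0010 pattern used above;
            // getIdentifier() returns 0 if the drawable is missing, so fall back as needed.
            int resId = localContext.getResources().getIdentifier(
                    "rec" + sandID, "drawable", localContext.getPackageName());

            BitmapFactory.Options opts = new BitmapFactory.Options();
            opts.inSampleSize = 4;   // decode at 1/4 size; tune to the ImageView size
            Bitmap scaled = BitmapFactory.decodeResource(localContext.getResources(), resId, opts);

            Bitmap rounded = getRoundedCornerImage(scaled);
            imageCache.put(sandID, rounded);
            return rounded;
        }

        // In getView():
        //   picture.setImageBitmap(getRecipeImage(SandID));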

  • Optimizing python code performance when importing zipped csv to a mongo collection

    - by mark
    I need to import a zipped csv into a mongo collection, but there is a catch - every record contains a timestamp in Pacific Time, which must be converted to the local time corresponding to the (longitude,latitude) pair found in the same record. The code looks like so: def read_csv_zip(path, timezones): with ZipFile(path) as z, z.open(z.namelist()[0]) as input: csv_rows = csv.reader(input) header = csv_rows.next() check,converters = get_aux_stuff(header) for csv_row in csv_rows: if check(csv_row): row = { converter[0]:converter[1](value) for converter, value in zip(converters, csv_row) if allow_field(converter) } ts = row['ts'] lng, lat = row['loc'] found_tz_entry = timezones.find_one(SON({'loc': {'$within': {'$box': [[lng-tz_lookup_radius, lat-tz_lookup_radius],[lng+tz_lookup_radius, lat+tz_lookup_radius]]}}})) if found_tz_entry: tz_name = found_tz_entry['tz'] local_ts = ts.astimezone(timezone(tz_name)).replace(tzinfo=None) row['tz'] = tz_name else: local_ts = (ts.astimezone(utc) + timedelta(hours = int(lng/15))).replace(tzinfo = None) row['local_ts'] = local_ts yield row def insert_documents(collection, source, batch_size): while True: items = list(itertools.islice(source, batch_size)) if len(items) == 0: break; try: collection.insert(items) except: for item in items: try: collection.insert(item) except Exception as exc: print("Failed to insert record {0} - {1}".format(item['_id'], exc)) def main(zip_path): with Connection() as connection: data = connection.mydb.data timezones = connection.timezones.data insert_documents(data, read_csv_zip(zip_path, timezones), 1000) The code proceeds as follows: Every record read from the csv is checked and converted to a dictionary, where some fields may be skipped, some titles be renamed (from those appearing in the csv header), some values may be converted (to datetime, to integers, to floats. etc ...) For each record read from the csv, a lookup is made into the timezones collection to map the record location to the respective time zone. If the mapping is successful - that timezone is used to convert the record timestamp (pacific time) to the respective local timestamp. If no mapping is found - a rough approximation is calculated. The timezones collection is appropriately indexed, of course - calling explain() confirms it. The process is slow. Naturally, having to query the timezones collection for every record kills the performance. I am looking for advises on how to improve it. Thanks. EDIT The timezones collection contains 8176040 records, each containing four values: > db.data.findOne() { "_id" : 3038814, "loc" : [ 1.48333, 42.5 ], "tz" : "Europe/Andorra" } EDIT2 OK, I have compiled a release build of http://toblerity.github.com/rtree/ and configured the rtree package. Then I have created an rtree dat/idx pair of files corresponding to my timezones collection. So, instead of calling collection.find_one I call index.intersection. Surprisingly, not only there is no improvement, but it works even more slowly now! May be rtree could be fine tuned to load the entire dat/idx pair into RAM (704M), but I do not know how to do it. Until then, it is not an alternative. In general, I think the solution should involve parallelization of the task. 
    EDIT3: Profile output when using collection.find_one:

        >>> p.sort_stats('cumulative').print_stats(10)
        Tue Apr 10 14:28:39 2012    ImportDataIntoMongo.profile

        64549590 function calls (64549180 primitive calls) in 1231.257 seconds

        Ordered by: cumulative time
        List reduced from 730 to 10 due to restriction <10>

        ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
             1    0.012    0.012 1231.257 1231.257  ImportDataIntoMongo.py:1(<module>)
             1    0.001    0.001 1230.959 1230.959  ImportDataIntoMongo.py:187(main)
             1  853.558  853.558  853.558  853.558  {raw_input}
             1    0.598    0.598  370.510  370.510  ImportDataIntoMongo.py:165(insert_documents)
        343407    9.965    0.000  359.034    0.001  ImportDataIntoMongo.py:137(read_csv_zip)
        343408    2.927    0.000  287.035    0.001  c:\python27\lib\site-packages\pymongo\collection.py:489(find_one)
        343408    1.842    0.000  274.803    0.001  c:\python27\lib\site-packages\pymongo\cursor.py:699(next)
        343408    2.542    0.000  271.212    0.001  c:\python27\lib\site-packages\pymongo\cursor.py:644(_refresh)
        343408    4.512    0.000  253.673    0.001  c:\python27\lib\site-packages\pymongo\cursor.py:605(__send_message)
        343408    0.971    0.000  242.078    0.001  c:\python27\lib\site-packages\pymongo\connection.py:871(_send_message_with_response)

    Profile output when using index.intersection:

        >>> p.sort_stats('cumulative').print_stats(10)
        Wed Apr 11 16:21:31 2012    ImportDataIntoMongo.profile

        41542960 function calls (41542536 primitive calls) in 2889.164 seconds

        Ordered by: cumulative time
        List reduced from 778 to 10 due to restriction <10>

        ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
             1    0.028    0.028 2889.164 2889.164  ImportDataIntoMongo.py:1(<module>)
             1    0.017    0.017 2888.679 2888.679  ImportDataIntoMongo.py:202(main)
             1 2365.526 2365.526 2365.526 2365.526  {raw_input}
             1    0.766    0.766  502.817  502.817  ImportDataIntoMongo.py:180(insert_documents)
        343407    9.147    0.000  491.433    0.001  ImportDataIntoMongo.py:152(read_csv_zip)
        343406    0.571    0.000  391.394    0.001  c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:384(intersection)
        343406  379.957    0.001  390.824    0.001  c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:435(_intersection_obj)
        686513   22.616    0.000   38.705    0.000  c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:451(_get_objects)
        343406    6.134    0.000   33.326    0.000  ImportDataIntoMongo.py:162(<dictcomp>)
           346    0.396    0.001   30.665    0.089  c:\python27\lib\site-packages\pymongo\collection.py:240(insert)

    EDIT4: I have parallelized the code, but the results are still not very encouraging. I am convinced it could be done better. See my own answer to this question for details.
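    One direction that might help here (an editorial sketch, not something from the original post): since nearby records resolve to the same time zone, the per-record find_one call can be memoized on a coarse (lng, lat) grid so that repeated coordinates hit an in-memory dictionary instead of MongoDB. The helper name lookup_tz and the grid granularity below are assumptions for illustration and would need tuning against the real data; the sketch assumes Python 2 and pymongo as in the question.

        tz_cache = {}

        def lookup_tz(timezones, lng, lat, radius):
            # Round the coordinates to a coarse grid so that nearby records
            # share one cached lookup instead of issuing one query each.
            key = (round(lng, 1), round(lat, 1))
            if key not in tz_cache:
                glng, glat = key
                tz_cache[key] = timezones.find_one(
                    {'loc': {'$within': {'$box': [[glng - radius, glat - radius],
                                                  [glng + radius, glat + radius]]}}})
            return tz_cache[key]

    Inside read_csv_zip the lookup would then become found_tz_entry = lookup_tz(timezones, lng, lat, tz_lookup_radius); whether the cache pays off depends on how clustered the coordinates actually are.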

    Read the article

  • Stopping cookies being set from a domain (aka "cookieless domain") to increase site performance

    - by Django Reinhardt
    I was reading Google's documentation about improving site speed. One of their recommendations is serving static content (images, CSS, JS, etc.) from a "cookieless domain":

        "Static content, such as images, JS and CSS files, don't need to be accompanied by cookies, as there is no user interaction with these resources. You can decrease request latency by serving static resources from a domain that doesn't serve cookies."

    Google then says that the best way to do this is to buy a new domain and set it to point to your current one:

        "To reserve a cookieless domain for serving static content, register a new domain name and configure your DNS database with a CNAME record that points the new domain to your existing domain A record. Configure your web server to serve static resources from the new domain, and do not allow any cookies to be set anywhere on this domain. In your web pages, reference the domain name in the URLs for the static resources."

    This is pretty straightforward stuff, except for the bit where it says to "configure your web server to serve static resources from the new domain, and do not allow any cookies to be set anywhere on this domain". From what I've read, there's no setting in IIS that lets you say "serve static resources", so how do I prevent ASP.NET from setting cookies on this new domain? At present, even if I'm just requesting a .jpg from the new domain, it sets a cookie in my browser, even though our application's cookies are set on our old domain. For example, ASP.NET sets an ".ASPXANONYMOUS" cookie that (as far as I'm aware) we're not telling it to set. Apologies if this is a real newb question, I'm new at this! Thanks.
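    For what it's worth, one commonly suggested approach (a hedged sketch, not a verified recipe) is to configure the cookieless host as its own IIS site and give it a web.config that switches off the ASP.NET features that write cookies - session state, forms authentication and, in this particular case, anonymous identification, which is the feature behind the .ASPXANONYMOUS cookie:

        <?xml version="1.0"?>
        <!-- Sketch of a web.config for the static-only site; assumes the
             cookieless domain is set up as a separate IIS site/application
             that serves nothing but images, CSS and JS. -->
        <configuration>
          <system.web>
            <sessionState mode="Off" />
            <authentication mode="None" />
            <anonymousIdentification enabled="false" />
            <roleManager enabled="false" />
          </system.web>
        </configuration>

    Whether this removes every cookie depends on which modules the site actually loads, so it is a starting point rather than a guarantee.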

    Read the article

  • Set service dependencies after install

    - by Dennis
    I have an application that runs as a Windows service. It stores various settings in a database that are looked up when the service starts. I built the service to support several types of databases (SQL Server, Oracle, MySQL, etc.). End users often choose to configure the software to use SQL Server (they can simply modify a config file with the connection string and restart the service). The problem is that when their machine boots up, SQL Server is often started after my service, so my service errors out on startup because it can't connect to the database. I know that I can specify dependencies for my service to help guide the Windows service manager to start the appropriate services before mine. However, I don't know which services to depend upon at install time (when my service is registered), since the user can change databases later on. So my question is: is there a way for the user to manually indicate the service dependencies based on the database that they are using? If not, what is the proper design approach that I should be taking? I've thought about doing something like waiting 30 seconds after my service starts before connecting to the database, but this seems really flaky for various reasons. I've also considered trying to "lazily" connect to the database; the problem is that I need a connection immediately upon startup, since the database contains various pieces of vital information that my service needs when it first starts. Any ideas?
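    One possibility (an editorial sketch, not something stated in the post): dependencies are not fixed at install time - they can be rewritten later, so the same configuration step that switches the connection string could also re-point the dependency with sc.exe. The service names below are placeholders:

        rem Sketch only: run as part of the "switch to SQL Server" configuration step.
        rem "MyService" and "MSSQLSERVER" are placeholder service names;
        rem note the mandatory space after "depend=".
        sc config MyService depend= MSSQLSERVER

    The same information lives in the DependOnService value under HKLM\SYSTEM\CurrentControlSet\Services\<service name>; an alternative is to mark the service Automatic (Delayed Start) and retry the first database connection for a short period instead of failing outright.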

    Read the article

  • Cannot connect to MySQL server using JSP

    - by Dibya
    I have just started with JSP. I began by writing simple programs to display dates and system info. Then I tried to connect to a MySQL database on my free hosting account, but I am not able to connect. Here is my code:

        <%@ page import="java.sql.*" %>
        <%@ page import="java.io.*" %>
        <html>
        <head>
        <title>Connection with mysql database</title>
        </head>
        <body>
        <h1>Connection status</h1>
        <%
        try {
            String connectionURL = "jdbc:mysql://mysql2.000webhost.com/a3932573_product";
            Connection connection = null;
            Class.forName("com.mysql.jdbc.Driver").newInstance();
            connection = DriverManager.getConnection(connectionURL, "a3932573_dibya", "******");
            if (!connection.isClosed())
                out.println("Successfully connected to " + "MySQL server using TCP/IP...");
            connection.close();
        } catch (Exception ex) {
            out.println("Unable to connect to database.");
        }
        %>
        </font>
        </body>
        </html>

    The page shows "Connection status" with the message "Unable to connect to database." I have tested this connection from PHP using the same username, password and database name. Where am I making a mistake?
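    A side note (an editorial suggestion, not part of the original question): the catch block swallows the real reason, so the usual first step is to print the exception itself and spell out the port; it is also worth knowing that many free hosts only accept MySQL connections from scripts running on their own servers, which would make a remote JDBC connection fail regardless of the code. A minimal sketch of a more diagnostic version of the scriptlet:

        <%-- Sketch only: surface the driver's actual error instead of a generic
             message; host, database and user names are copied from the post. --%>
        <%
        try {
            Class.forName("com.mysql.jdbc.Driver");
            Connection connection = DriverManager.getConnection(
                "jdbc:mysql://mysql2.000webhost.com:3306/a3932573_product",
                "a3932573_dibya", "******");
            out.println("Connected: " + !connection.isClosed());
            connection.close();
        } catch (Exception ex) {
            out.println("Unable to connect: " + ex);  // e.g. "Communications link failure"
        }
        %>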

    Read the article

< Previous Page | 288 289 290 291 292 293 294 295 296 297 298 299  | Next Page >