Search Results

Search found 95201 results on 3809 pages for 'system data sqlite'.


  • Is it possible to have an enum field in a class persisted with OrmLite?

    - by htf
    Hello. I'm trying to persist the following class with OrmLite:

        public class Field {
            @DatabaseField(id = true)
            public String name;

            @DatabaseField(canBeNull = false)
            public FieldType type;

            public Field() { }
        }

    FieldType is a public enum. The column corresponding to the type field is a string in SQLite (it doesn't support enums). When I try to use it, I get the following exception:

        INFO [main] (SingleConnectionDataSource.java:244) - Established shared JDBC Connection: org.sqlite.Conn@5224ee
        Exception in thread "main" org.springframework.beans.factory.BeanInitializationException: Initialization of DAO failed; nested exception is java.lang.IllegalArgumentException: Unknown field class class enums.FieldType for field FieldType:name=type,class=class orm.Field
            at org.springframework.dao.support.DaoSupport.afterPropertiesSet(DaoSupport.java:51)
            at orm.FieldDAO.getInstance(FieldDAO.java:17)
            at orm.Field.fromString(Field.java:23)
            at orm.Field.main(Field.java:38)
        Caused by: java.lang.IllegalArgumentException: Unknown field class class enums.FieldType for field FieldType:name=type,class=class orm.Field
            at com.j256.ormlite.field.FieldType.<init>(FieldType.java:54)
            at com.j256.ormlite.field.FieldType.createFieldType(FieldType.java:381)
            at com.j256.ormlite.table.DatabaseTableConfig.fromClass(DatabaseTableConfig.java:82)
            at com.j256.ormlite.dao.BaseJdbcDao.initDao(BaseJdbcDao.java:116)
            ... 3 more

    So how do I tell OrmLite that the values on the Java side come from an enum?
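
    A hedged sketch of one likely fix, assuming a version of ORMLite that ships DataType.ENUM_STRING (later releases do): tell the field mapper explicitly how to serialize the enum rather than letting it guess from the Java type.

        // Sketch, not the poster's code: DataType.ENUM_STRING stores the
        // enum's name() as TEXT; DataType.ENUM_INTEGER stores the ordinal.
        // FieldType below is the poster's own enum, which unluckily shares
        // its name with ORMLite's internal com.j256.ormlite.field.FieldType.
        import com.j256.ormlite.field.DataType;
        import com.j256.ormlite.field.DatabaseField;

        public class Field {
            @DatabaseField(id = true)
            public String name;

            @DatabaseField(canBeNull = false, dataType = DataType.ENUM_STRING)
            public FieldType type;

            public Field() { }
        }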

    Read the article

  • Faster Insertion of Records into a Table with SQLAlchemy

    - by Kyle Brandt
    I am parsing a log and inserting it into either MySQL or SQLite using SQLAlchemy and Python. Right now I open a connection to the DB, and as I loop over each line, I insert it after it is parsed (this is just one big table right now; I'm not very experienced with SQL). I then close the connection when the loop is done. The summarized code is:

        log_table = schema.Table('log_table', metadata,
            schema.Column('id', types.Integer, primary_key=True),
            schema.Column('time', types.DateTime),
            schema.Column('ip', types.String(length=15))
        ....
        engine = create_engine(...)
        metadata.bind = engine
        connection = engine.connect()
        ....
        for line in file_to_parse:
            m = line_regex.match(line)
            if m:
                fields = m.groupdict()
                pythonified = pythoninfy_log(fields)  # Turn them into ints, datetimes, etc
                if use_sql:
                    ins = log_table.insert(values=pythonified)
                    connection.execute(ins)
                    parsed += 1

    My two questions are: Is there a way to speed up the inserts within this basic framework? Maybe a queue of inserts and some insertion threads, some sort of bulk insert, etc.? When I used MySQL, the insert time for ~1.2 million records was 15 minutes. With SQLite, the insert time was a little over an hour. Does that time difference between the db engines seem about right, or does it mean I am doing something very wrong?
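
    A hedged sketch of the usual first fix, not the poster's code: accumulate parsed rows and execute the insert with a list of dicts, which lets SQLAlchemy drive the DB-API's executemany(), and wrap the work in an explicit transaction. The latter is the big win on SQLite, where every standalone INSERT is otherwise its own synced transaction; the chunk size below is an arbitrary assumption.

        batch = []
        trans = connection.begin()           # one transaction instead of one per row
        for line in file_to_parse:
            m = line_regex.match(line)
            if m:
                batch.append(pythoninfy_log(m.groupdict()))
                parsed += 1
            if len(batch) >= 1000:           # flush in chunks; 1000 is arbitrary
                connection.execute(log_table.insert(), batch)  # executemany()
                batch = []
        if batch:
            connection.execute(log_table.insert(), batch)
        trans.commit()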

    Read the article

  • SQL: find entries in 1:n relation that don't comply with condition spanning multiple rows

    - by milianw
    I'm trying to optimize SQL queries in Akonadi and came across the following problem that is apparently not easy to solve with SQL, at least for me. Assume the following table structure (should work in SQLite, PostgreSQL, MySQL):

        CREATE TABLE a ( a_id INT PRIMARY KEY );
        INSERT INTO a (a_id) VALUES (1), (2), (3), (4);

        CREATE TABLE b (
          b_id INT PRIMARY KEY,
          a_id INT,
          name VARCHAR(255) NOT NULL
        );
        INSERT INTO b (b_id, a_id, name) VALUES
          (1, 1, 'foo'), (2, 1, 'bar'), (3, 1, 'asdf'),
          (4, 2, 'foo'), (5, 2, 'bar'),
          (6, 3, 'foo');

    Now my problem is to find entries in a that are missing name entries in table b. E.g. I need to make sure each entry in a has at least the name entries "foo" and "bar" in table b. Hence the query should return something similar to:

        a_id = 3 is missing name "bar"
        a_id = 4 is missing names "foo" and "bar"

    Since both tables are potentially huge in Akonadi, performance is of utmost importance. One solution in MySQL would be:

        SELECT a.a_id,
               CONCAT('|', GROUP_CONCAT(name ORDER BY name ASC SEPARATOR '|'), '|') AS names
        FROM a
        LEFT JOIN b USING (a_id)
        GROUP BY a.a_id
        HAVING names IS NULL OR names NOT LIKE '%|bar|foo|%';

    I have yet to measure the performance tomorrow, but I severely doubt it will be fast for tens of thousands of entries in a and thrice as many in b. Furthermore, we want to support SQLite and PostgreSQL, where to my knowledge the GROUP_CONCAT function is not available. Thanks, good night.
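
    A hedged sketch of a portable alternative (plain joins, no GROUP_CONCAT): cross-join every a_id with the set of required names, then keep the combinations that have no matching row in b. The required-names list ('foo', 'bar') is inlined here; in practice it could live in its own table.

        SELECT a.a_id, req.name AS missing_name
        FROM a
        CROSS JOIN (SELECT 'foo' AS name UNION ALL SELECT 'bar') AS req
        LEFT JOIN b ON b.a_id = a.a_id AND b.name = req.name
        WHERE b.b_id IS NULL;

    With an index on b(a_id, name), the LEFT JOIN probe is a single lookup per (a_id, name) pair, which should scale far better than concatenating every name per group.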

    Read the article

  • SQL code to display counts() of value retrieved from another column

    - by Doctor Trout
    I have three tables (these are the relevant columns):

        Table1: bookingid, person, role
        Table2: bookingid, projectid
        Table3: projectid, project, numberofrole1, numberofrole2

    Table1.role can take two values: "role1" or "role2". What I want to do is to show which projects don't have the correct number of roles in Table1. The number of people there should be in each role is in Table3. For example, if Table1 contains these three rows:

        bookingid, person, role
        7, Tim, role1
        7, Bob, role1
        7, Charles, role2

    and Table2:

        bookingid, projectid
        7, 1

    and Table3:

        projectid, project, numberofrole1, numberofrole2
        1, Test1, 2, 2

    I would like the results to show that there is not the correct number of role2s for project Test1. To be honest, something like this is a bit beyond my ability, so I'm open to suggestions on the best way to do this. I'm using SQLite and PHP (it's only a small project). I suppose I could do something with the PHP at the end once I've got my results, but I wondered if there was a better way to do it with SQLite. I started by doing something like this:

        SELECT project, COUNT(numberofrole1) AS "Role"
        FROM Table1
        JOIN Table2 USING (bookingid)
        JOIN Table3 USING (projectid)
        WHERE role = 'role1'
        GROUP BY project

    But I can't work out how to compare the value returned as "Role" with the value got from numberofrole1. Any help is gratefully received.
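
    A hedged sketch of one way to do the comparison in a single query: count each role with a conditional SUM and compare against the expected columns in the HAVING clause, so only projects with a wrong count come back. Table and column names follow the question.

        SELECT t3.project,
               SUM(CASE WHEN t1.role = 'role1' THEN 1 ELSE 0 END) AS actual_role1,
               t3.numberofrole1,
               SUM(CASE WHEN t1.role = 'role2' THEN 1 ELSE 0 END) AS actual_role2,
               t3.numberofrole2
        FROM Table3 t3
        LEFT JOIN Table2 t2 ON t2.projectid = t3.projectid
        LEFT JOIN Table1 t1 ON t1.bookingid = t2.bookingid
        GROUP BY t3.projectid, t3.project, t3.numberofrole1, t3.numberofrole2
        HAVING SUM(CASE WHEN t1.role = 'role1' THEN 1 ELSE 0 END) <> t3.numberofrole1
            OR SUM(CASE WHEN t1.role = 'role2' THEN 1 ELSE 0 END) <> t3.numberofrole2;

    The LEFT JOINs keep projects with no bookings at all in the result, reported with counts of zero.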

    Read the article

  • PHP and use of the Num_Of_Rows() function?

    - by Michael Smith
    Below is some PHP code that I have written. The problem occurs when it gets to the use of num_of_rows(): it just does not seem to work, and I can't figure out why.

        <?php
        try {
            $divMon_ID = array();
            $divMon_Position = array();
            $divMon_Width = array();
            $divMon_Div = array();

            $db = new PDO('sqlite:db/EVENTS.sqlite');
            $result_mon = $db->query('SELECT * FROM Monday');
            $totalRows = mysql_num_rows($result_mon);

            //for($counter=1; $counter<=10; $counter+=1)
            //{
                //<div id="event_1" style="position:absolute; left: 0px; top:-39px; width:100px; font-family:Arial, Helvetica, sans-serif; font-size:small; border:2px blue solid; height:93px">
                //$divMon_ID[]=$row['Id'];
                //$divMon_Position[]=$row['Origin'];
                //$divMon_P[]=$row['Position'];
            //}
        } catch(PDOException $e) {
            print 'Exception : '.$e->getMessage();
        }
        ?>

    I know the problem is the $totalRows = mysql_num_rows($result_mon); statement, because when I comment it out, the page loads. Am I using the function in the wrong way? Thanks.
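
    A hedged sketch of the likely fix: mysql_num_rows() belongs to the old mysql_* extension and only accepts a mysql result resource, so calling it on a PDO statement for an SQLite database fails outright (and PDOStatement::rowCount() is not guaranteed to report SELECT row counts on SQLite). Fetching once and counting in PHP sidesteps both problems:

        $result_mon = $db->query('SELECT * FROM Monday');
        $rows = $result_mon->fetchAll(PDO::FETCH_ASSOC);  // pull all rows once
        $totalRows = count($rows);                         // count in PHP
        foreach ($rows as $row) {
            // $row['Id'], $row['Origin'], ... as in the commented-out loop
        }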

    Read the article

  • SQL SERVER – 2008 – Introduction to Snapshot Database – Restore From Snapshot

    - by pinaldave
    Snapshot database is one of the most interesting concepts that I have used at some places recently. Here is a quick definition of the subject from Books Online:

        A Database Snapshot is a read-only, static view of a database (the source database). Multiple snapshots can exist on a source database and always reside on the same server instance as the database. Each database snapshot is consistent, in terms of transactions, with the source database as of the moment of the snapshot's creation. A snapshot persists until it is explicitly dropped by the database owner.

    If you do not know how Snapshot databases work, here is a quick note on the subject. However, please refer to the official description on Books Online for accuracy. A Snapshot database is a read-only database created from an original database called the "source database". This database operates at the page level. When the Snapshot database is created, it is produced on sparse files; in fact, it does not occupy any space (or occupies very little space) in the operating system. When any data page is modified in the source database, that data page is copied to the Snapshot database, making the sparse file size increase. When an unmodified data page is read in the Snapshot database, it actually reads the pages of the original database. In other words, the Snapshot database preserves the data pages as they were at the moment of its creation, no matter what changes happen in the source database afterwards.

    Let us see a simple example of Snapshot. In the following exercise, we will do a few operations. Please note that this script is for demo purposes only: there are a few considerations of CPU, DISK I/O and memory, which will be discussed in future posts.

    - Create Snapshot
    - Delete Data from Original DB
    - Restore Data from Snapshot

    First, let us create the first Snapshot database and observe the sparse file details.

        USE master
        GO
        -- Create Regular Database
        CREATE DATABASE RegularDB
        GO
        USE RegularDB
        GO
        -- Populate Regular Database with Sample Table
        CREATE TABLE FirstTable (ID INT, Value VARCHAR(10))
        INSERT INTO FirstTable VALUES(1, 'First');
        INSERT INTO FirstTable VALUES(2, 'Second');
        INSERT INTO FirstTable VALUES(3, 'Third');
        GO
        -- Create Snapshot Database
        CREATE DATABASE SnapshotDB ON
        (Name ='RegularDB', FileName='c:\SSDB.ss1')
        AS SNAPSHOT OF RegularDB;
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO

    Now let us see the resultset for the same. Next, let us delete something from the original DB and check the same details we checked before.

        -- Delete from Regular Database
        DELETE FROM RegularDB.dbo.FirstTable;
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO

    When we check the details of the sparse file created by the Snapshot database, we will find some interesting details. The details of the Regular DB remain the same. It clearly shows that when we delete data from the Regular/Source DB, the data pages are copied to the Snapshot database. This is the reason why the size of the Snapshot DB increases. Now let us take this small exercise to the next level and restore our deleted data from the Snapshot DB to the original source DB.

        -- Restore Data from Snapshot Database
        USE master
        GO
        RESTORE DATABASE RegularDB
        FROM DATABASE_SNAPSHOT = 'SnapshotDB';
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO
        -- Clean up
        DROP DATABASE [SnapshotDB];
        DROP DATABASE [RegularDB];
        GO

    Now let us check the details of the select statement, and we can see that we were able to successfully restore the database from the Snapshot database. We can clearly see that this is a very useful feature should you encounter a business need for it. I would like to request the readers to share more details if they are using this feature in their business. Also, let me know if you think it can potentially be used to achieve any other tasks. The complete script of the aforementioned operation is as follows, for easy reference:

        USE master
        GO
        -- Create Regular Database
        CREATE DATABASE RegularDB
        GO
        USE RegularDB
        GO
        -- Populate Regular Database with Sample Table
        CREATE TABLE FirstTable (ID INT, Value VARCHAR(10))
        INSERT INTO FirstTable VALUES(1, 'First');
        INSERT INTO FirstTable VALUES(2, 'Second');
        INSERT INTO FirstTable VALUES(3, 'Third');
        GO
        -- Create Snapshot Database
        CREATE DATABASE SnapshotDB ON
        (Name ='RegularDB', FileName='c:\SSDB.ss1')
        AS SNAPSHOT OF RegularDB;
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO
        -- Delete from Regular Database
        DELETE FROM RegularDB.dbo.FirstTable;
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO
        -- Restore Data from Snapshot Database
        USE master
        GO
        RESTORE DATABASE RegularDB
        FROM DATABASE_SNAPSHOT = 'SnapshotDB';
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO
        -- Clean up
        DROP DATABASE [SnapshotDB];
        DROP DATABASE [RegularDB];
        GO

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: SQL, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Extending QuickBooks Reporting with the QuickBooks ADO.NET Data Provider

    - by dataintegration
    The ADO.NET Provider for QuickBooks comes with several reports you may request from QuickBooks by default. However, there are many more that are not readily available. The ADO.NET Provider for QuickBooks makes it easy for you to create new reports and customize existing ones. In this article, we will illustrate how to create your own report and retrieve it from the Server Explorer in Visual Studio. For this example we will show how to create an Item Profitability Report.

    Creating the report script file

    Step 1: Download the sample reports available here. Extract them to a folder of your choice.

    Step 2: Make a copy of the ReportGeneralSummary.rsd file and rename it to ItemProfitability.rsd. Then open the file in any text editor.

    Step 3: Open the installation directory of the ADO.NET Provider for QuickBooks. Under the \db\ folder, locate the ReportJob.rsb file. Open this file in another text editor. Note: Although we are using ReportJob.rsb for this example, other reports may be contained in other Report*.rsb files. We recommend consulting the included help file and first locating the Report stored procedure and ReportType you are looking for. Otherwise, you may open each Report*.rsb file and look under the "reporttype" input for the report you are attempting to create.

    Step 4: First, let's rename the title of ItemProfitability.rsd. Near the top of the file you will see a title and description. Change the title to match the name of the file. Change the description to anything you like. For example:

        <rsb:info title="ItemProfitability" description="Executes my custom report.">

    Just below the title, there are a number of columns. The Id represents the row number. The RowType represents the type of data returned by QuickBooks. The ColumnValue* columns represent all of the column data returned by QuickBooks. In some instances, we may need to add additional ColumnValue columns.

    Step 5: To add additional ColumnValue columns, simply copy the last column, paste it directly below, and continue increasing the numerical value at the end of the attribute name. For example:

        <attr name="ColumnValue9" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/>
        <attr name="ColumnValue10" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/>
        <attr name="ColumnValue11" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/>
        <attr name="ColumnValue12" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/>
        ...

    Caution: Do not rename the ColumnValue* definitions themselves. They are generalized so that we can understand each type of report returned by QuickBooks. Renaming them to something other than ColumnValue* will cause your columns to return with null values.

    Step 6: Now let's update the available inputs for the table. From the ReportJob.rsb file, copy all of the input elements into ItemProfitability under the "Psuedo-Column definitions" comment. You will be replacing the existing input elements in ItemProfitability with inputs from ReportJob. When you are done, it should look like this:

        <!-- Psuedo-Column definitions -->
        <input name="reporttype" description="The type of the report."
               value="ITEMESTIMATESVSACTUALS,ITEMPROFITABILITY,JOBESTIMATESVSACTUALSDETAIL,JOBESTIMATESVSACTUALSSUMMARY,JOBPROFITABILITYDETAIL,JOBPROFITABILITYSUMMARY,"
               default="ITEMESTIMATESVSACTUALS" />
        <input name="reportperiod" description="Report date range in the format (fromdate:todate), and either value may be omitted for an open ended range (e.g. 2009-12-25:). Supported date format: yyyy-MM-dd." />
        <input name="reportdaterangemacro" description="Use a predefined date range."
               value="ALL,TODAY,THISWEEK,THISWEEKTODATE,THISMONTH,THISMONTHTODATE,THISQUARTER,THISQUARTERTODATE,THISYEAR,THISYEARTODATE,YESTERDAY,LASTWEEK,LASTWEEKTODATE,LASTMONTH,LASTMONTHTODATE,LASTQUARTER,LASTQUARTERTODATE,LASTYEAR,LASTYEARTODATE,NEXTWEEK,NEXTFOURWEEKS,NEXTMONTH,NEXTQUARTER,NEXTYEAR,"
               default="ALL" />
        ...

    Step 7: Now let's update the operationname attribute. This needs to match the same operationname used by ReportJob. After you have copied the correct value from ReportJob.rsb, the operationname in ItemProfitability should look like so:

        <rsb:set attr="operationname" value="qbReportJob"/>

    Step 8: There is one more thing we can do to make this a true Item Profitability report. We can remove the reporttype input and hardcode the value. To do this, copy and paste the rsb:set used for operationname. Then rename the attr and value to match the name and value you want to use. For example:

        <rsb:set attr="operationname" value="qbReportJob"/>
        <rsb:set attr="reporttype" value="ITEMPROFITABILITY"/>

    After this you can remove the input for reporttype. Now that you have your own report file, we can move on to displaying the report in the Visual Studio Server Explorer.

    Accessing the report through the Data Provider

    Step 1: Open Visual Studio. In the Server Explorer, configure a new connection with the QuickBooks Data Provider.

    Step 2: For the Location connection string property, enter the directory where the new report has been saved.

    Step 3: The new report should appear as a new view in the Server Explorer. Let's retrieve data from it.

    Step 4: You can specify any inputs in the WHERE clause.

    New Report Example Script

    To help you get started using this new QuickBooks Data Provider report, you will need to download the QuickBooks ADO.NET Data Provider and the fully functional sample script.

    Read the article

  • Customer Perspectives: Oracle Data Integrator

    - by Julien Testut
    The Data Integration Product Management team will be hosting a customer panel session dedicated to Oracle Data Integrator at Oracle OpenWorld. I will have the pleasure of presenting this session with three of our customers: Paychex, Ross Stores and Turkcell. In this session, you will hear how Paychex, Ross Stores and Turkcell utilize Oracle Data Integrator to meet their IT and business needs. Our customers will share how they use ODI in their environments, best practices, lessons learned and the benefits of implementing Oracle Data Integrator. If you're interested in hearing more about how our customers use Oracle Data Integrator, then I recommend attending this session:

        Customer Perspectives: Oracle Data Integrator
        Wednesday, October 3rd, 1:15 PM - 2:15 PM
        Marriott Marquis - Golden Gate C3

    The Data Integration track at OpenWorld covers a variety of topics and speakers. In addition to product management of Oracle GoldenGate, Oracle Data Integrator, and Enterprise Data Quality presenting product updates and roadmap, we have several customer panels and stand-alone sessions featuring select customers such as St. Jude Medical, Raymond James, Aderas, Turkcell, Paychex, Comcast, Ticketmaster, Bank of America and more. You can see an overview of Data Integration sessions here. If you are not able to attend OpenWorld, please check out our latest resources for Data Integration and Oracle GoldenGate. In the coming weeks you will see more blogs about our products' new capabilities and what to expect at OpenWorld. We hope to see you at OpenWorld and stay in touch via our future blogs.

    Read the article

  • Configuring MySQL Cluster Data Nodes

    - by Mat Keep
    In my previous blog post, I discussed the enhanced performance and scalability delivered by extensions to the multi-threaded data nodes in MySQL Cluster 7.2. In this post, I'll share best practices on the configuration of data nodes to achieve optimum performance on the latest generations of multi-core, multi-thread CPU designs.

    Configuring the Data Nodes

    The configuration of data node threads can be managed in two ways via the config.ini file:

    - Simply set MaxNoOfExecutionThreads to the appropriate number of threads to be run in the data node, based on the number of threads presented by the processors used in the host or VM.
    - Use the new ThreadConfig variable that enables users to configure both the number of each thread type to use and also which CPUs to bind them to.

    The flexible configuration afforded by the multi-threaded data node enhancements means that it is possible to optimise data nodes to use anything from a single CPU/thread up to a 48 CPU/thread server. Co-locating the MySQL Server with a single data node can fully utilize servers with 64-80 CPU/threads. It is also possible to co-locate multiple data nodes per server, but this is now only required for very large servers with 4+ CPU sockets and dense multi-core processors.

    24 Threads and Beyond!

    An example of how to make best use of a 24 CPU/thread server box is to configure the following (a config.ini sketch of this layout appears at the end of this entry):

    - 8 ldm threads
    - 4 tc threads
    - 3 recv threads
    - 3 send threads
    - 1 rep thread for asynchronous replication

    Each of those threads should be bound to a CPU. It is possible to bind the main thread (schema management domain) and the IO threads to the same CPU in most installations. In the configuration above, we have bound threads to 20 different CPUs. We should also protect these 20 CPUs from interrupts by using the IRQBALANCE_BANNED_CPUS configuration variable in /etc/sysconfig/irqbalance and setting it to 0x0FFFFF. The reason for doing this is that MySQL Cluster generates a lot of interrupt and OS kernel processing, so it is recommended to separate activity across CPUs to ensure conflicts with the MySQL Cluster threads are eliminated. When booting a Linux kernel it is also possible to provide the option isolcpus=0-19 in grub.conf. The result is that the Linux scheduler won't use these CPUs for any task; only by using CPU affinity syscalls can a process be made to run on those CPUs. By using this approach, together with binding MySQL Cluster threads to specific CPUs and banning those CPUs from IRQ processing, a very stable performance environment is created for a MySQL Cluster data node.

    On a 32 CPU/Thread server:

    - Increase the number of ldm threads to 12
    - Increase tc threads to 6
    - Provide 2 more CPUs for the OS and interrupts
    - The number of send and receive threads should, in most cases, still be sufficient

    On a 40 CPU/Thread server, increase ldm threads to 16, tc threads to 8 and increment send and receive threads to 4.

    On a 48 CPU/Thread server it is possible to optimize further by using:

    - 12 tc threads
    - 2 more CPUs for the OS and interrupts
    - Avoiding the use of the IO threads and main thread on the same CPU
    - Adding 1 more receive thread

    Summary

    As both this and the previous post seek to demonstrate, the multi-threaded data node extensions not only serve to increase the performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-thread processor designs. A big thanks to Mikael Ronstrom, Senior MySQL Architect at Oracle, for his work in developing these enhancements and best practices. You can download MySQL Cluster 7.2 today and try out all of these enhancements. The Getting Started guides are an invaluable aid to quickly building a Proof of Concept. Don't forget to check out the MySQL Cluster 7.2 New Features whitepaper to discover everything that is new in the latest GA release.
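
    A hedged config.ini sketch of the 24 CPU/thread layout above, not a drop-in file: the thread counts mirror the post, but the exact ThreadConfig grammar and the CPU numbering are assumptions to check against the MySQL Cluster 7.2 reference manual for your platform.

        [ndbd default]
        # 8 ldm + 4 tc + 3 send + 3 recv + 1 rep = 19 bound worker threads,
        # with main and io sharing CPU 19 for a total of 20 bound CPUs
        # (matching the 0x0FFFFF IRQBALANCE_BANNED_CPUS mask in the post).
        ThreadConfig=ldm={count=8,cpubind=0,1,2,3,4,5,6,7},tc={count=4,cpubind=8,9,10,11},send={count=3,cpubind=12,13,14},recv={count=3,cpubind=15,16,17},rep={count=1,cpubind=18},main={cpubind=19},io={cpubind=19}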

    Read the article

  • Graphite SQLite3 DatabaseError: attempt to write a readonly database

    - by Anadi Misra
    Running graphite under apache httpd with an sqlite database, I have what look like the correct folder permissions:

        [root@liaan55 httpd]# ls -ltr /var/lib | grep graphite
        drwxr-xr-x. 2 apache apache 4096 Aug 23 19:36 graphite-web

    and

        [root@liaan55 httpd]# ls -ltr /var/lib/graphite-web/
        total 68
        -rw-r--r--. 1 apache apache 65536 Aug 23 19:46 graphite.db

    syncdb also seems to have gone fine:

        [root@liaan55 httpd]# sudo -su apache
        bash-4.1$ whoami
        apache
        bash-4.1$ python /usr/lib/python2.6/site-packages/graphite/manage.py syncdb
        /usr/lib/python2.6/site-packages/graphite/settings.py:231: UserWarning: SECRET_KEY is set to an unsafe default. This should be set in local_settings.py for better security
          warn('SECRET_KEY is set to an unsafe default. This should be set in local_settings.py for better security')
        /usr/lib/python2.6/site-packages/django/conf/__init__.py:75: DeprecationWarning: The ADMIN_MEDIA_PREFIX setting has been removed; use STATIC_URL instead.
          "use STATIC_URL instead.", DeprecationWarning)
        /usr/lib/python2.6/site-packages/django/core/cache/__init__.py:82: DeprecationWarning: settings.CACHE_* is deprecated; use settings.CACHES instead.
          DeprecationWarning
        Creating tables ...
        Creating table account_profile
        Creating table account_variable
        Creating table account_view
        Creating table account_window
        Creating table account_mygraph
        Creating table dashboard_dashboard_owners
        Creating table dashboard_dashboard
        Creating table events_event
        Creating table auth_permission
        Creating table auth_group_permissions
        Creating table auth_group
        Creating table auth_user_user_permissions
        Creating table auth_user_groups
        Creating table auth_user
        Creating table django_session
        Creating table django_admin_log
        Creating table django_content_type
        Creating table tagging_tag
        Creating table tagging_taggeditem
        You just installed Django's auth system, which means you don't have any superusers defined.
        Would you like to create one now? (yes/no): yes
        Username (leave blank to use 'apache'): root
        E-mail address: [email protected]
        Password:
        Password (again):
        Superuser created successfully.
        Installing custom SQL ...
        Installing indexes ...
        Installed 0 object(s) from 0 fixture(s)
        bash-4.1$ exit

    and the local_settings.py file is as follows:

        STORAGE_DIR = '/var/lib/graphite-web'
        INDEX_FILE = '/var/lib/graphite-web/index'
        DATABASES = {
            'default': {
                'NAME': '/var/lib/graphite-web/graphite.db',
                'ENGINE': 'django.db.backends.sqlite3',
                'USER': '',
                'PASSWORD': '',
                'HOST': '',
                'PORT': ''
            }
        }

    I still get this error:

        [Sat Aug 23 19:47:17 2014] [error] [client 10.42.33.238] File "/usr/lib/python2.6/site-packages/django/db/backends/sqlite3/base.py", line 344, in execute
        [Sat Aug 23 19:47:17 2014] [error] [client 10.42.33.238] return Database.Cursor.execute(self, query, params)
        [Sat Aug 23 19:47:17 2014] [error] [client 10.42.33.238] DatabaseError: attempt to write a readonly database

    Not sure what is missing in this configuration.
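
    A hedged sketch of the usual culprits, not a verified fix: SQLite needs to create a journal file next to graphite.db, so httpd must be able to write to the directory itself, and the modes above actually look fine for that (apache owns the directory with rwx). That points at SELinux: the trailing dot in drwxr-xr-x. indicates a security context, and the default label under /var/lib is often not writable by httpd regardless of file modes. The context type below is an assumption to verify with the audit log (ausearch/audit2why).

        # relabel the directory for read-write access by httpd on SELinux systems
        semanage fcontext -a -t httpd_sys_rw_content_t "/var/lib/graphite-web(/.*)?"
        restorecon -Rv /var/lib/graphite-web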

    Read the article

  • How can I terminate a system command with alarm in Perl?

    - by rockyurock
    I am running the below code snippet on Windows. The server starts listening continuously after reading from the client. I want to terminate this command after a time period. If I use the alarm() function call within main.pl, it terminates the whole Perl program (here main.pl), so I called this system command by placing it in a separate Perl file and invoking that file (alarm.pl) from the original Perl file using the system command. But this way I was unable to capture the output of the system() call in either the original Perl file or the called one. Could anybody please let me know how to terminate a system() call, or how to capture its output, given the way I used it above?

    main.pl:

        my @output = system("alarm.pl");
        print "one iperf completed\n";
        open FILE, ">display.txt" or die $!;
        print FILE @output_1;
        close FILE;

    alarm.pl:

        alarm 30;
        my @output_1 = readpipe("adb shell cd /data/app; ./iperf -u -s -p 5001");
        open FILE, ">display.txt" or die $!;
        print FILE @output_1;
        close FILE;

    Either way, display.txt is always empty.
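
    A hedged sketch of the usual pattern for this, with one caveat: system() returns the child's exit status, never its output, which is why @output stays empty. Capturing with readpipe inside an eval with a local ALRM handler both times the command and keeps the output in the parent. Note that on Windows, alarm() is only emulated and may not interrupt a blocking pipe read; on a POSIX system this works as written.

        my @output;
        eval {
            local $SIG{ALRM} = sub { die "timeout\n" };
            alarm 30;
            @output = readpipe('adb shell "cd /data/app; ./iperf -u -s -p 5001"');
            alarm 0;
        };
        warn "iperf run timed out\n" if $@ eq "timeout\n";

        open my $fh, '>', 'display.txt' or die $!;
        print $fh @output;
        close $fh;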

    Read the article

  • AutoIncrement in SQLite with Subsonic 3

    - by Cooter
    This is probably a simple matter, but when I create a new object, the ID property starts off as 0 rather than null. As I understand it, SQLite takes/needs a value of null in the PK column to do the auto-increment (an INTEGER PRIMARY KEY is only auto-assigned when NULL is inserted or the column is omitted). So the short question is: how do I get the ID in the object to start life as null? Thanks, cooter
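
    A hedged sketch of the usual workaround, with the caveat that whether SubSonic 3 serializes a nullable key as NULL is an assumption worth testing against your setup (the Thing class here is hypothetical):

        // A nullable key starts life as null instead of 0, which lets
        // SQLite's INTEGER PRIMARY KEY auto-assign a value on insert.
        public class Thing
        {
            public int? ID { get; set; }      // null until the database assigns it
            public string Name { get; set; }
        }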

    Read the article

  • SELECT INTO statement in sqlite.

    - by monish
    Hi guys, I want to know whether SQLite supports the SELECT INTO statement. I am trying to save the data in my table1 into table2 as a backup of my database before modifying the data, but when I use the SELECT INTO statement a syntax error is generated. My query:

        SELECT * INTO equipments_backup FROM equipments;

        Last Error Message: near "INTO": syntax error

    Anyone's help will be appreciated. Thank you, Monish.
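
    A hedged note with the two standard SQLite substitutes: SELECT INTO is simply not in SQLite's grammar, which explains the syntax error. Creating the backup table from the query, or inserting into an existing one, does the same job:

        -- create equipments_backup and fill it in one statement
        CREATE TABLE equipments_backup AS SELECT * FROM equipments;

        -- or, if equipments_backup already exists with matching columns
        INSERT INTO equipments_backup SELECT * FROM equipments;

    Note that CREATE TABLE ... AS SELECT copies the data and column names but not constraints or indexes, which is usually acceptable for a temporary backup.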

    Read the article

  • Why .data() function of jQuery is better to prevent memory leaks?

    - by burak ozdogan
    Hi, regarding the jQuery utility function jQuery.data(), the online documentation says: "The jQuery.data() method allows us to attach data of any type to DOM elements in a way that is safe from circular references and therefore from memory leaks." Why can using document.body.foo = 52; result in a memory leak (and under what conditions), such that I should use jQuery.data(document.body, 'foo', 52); instead? Should I ALWAYS prefer .data() to expandos, in every case? (I would appreciate an example comparing the two.) Thanks, burak ozdogan
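
    A hedged sketch of the leak pattern .data() was designed around: old IE garbage-collected DOM (COM) objects and JScript objects separately, so a reference cycle that crossed the boundary was never collected. An expando pointing at a closure that captures the element is the classic example; jQuery.data() avoids the cycle by keeping the payload in a plain JS cache, keyed by a small id on the element:

        // Leak-prone in old IE: element -> expando -> closure -> element
        var el = document.getElementById('box');
        el.onclick = function () { el.style.color = 'red'; };  // cycle via el

        // The jQuery way: el only carries a small id; the value 52 lives
        // in jQuery's internal cache, so no DOM<->JS cycle is formed.
        jQuery.data(el, 'foo', 52);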

    Read the article

  • Can I mark an Email as "High Importance" for Outlook using System.Net.Mail?

    - by ccornet
    Part of the application I'm working on for my client involves sending emails for events. Sometimes these are highly important. My client, and most of my client's clients, use Outlook, which has the ability to mark a mail message as High Importance. Now, I know it is callous to assume that all end users will be using the same interface, so I am not. But considering you can send email from Outlook as High Importance even if the recipient is not necessarily reading it through Outlook, that means there is basically some data stored... somehow... that lets Outlook know whether a particular message was assigned High Importance. That's my interpretation, at least. The application currently uses System.Net.Mail to send out emails, using System.Net.Mail.MailMessage for writing them and System.Net.Mail.SmtpClient to send them. Is it possible to set this "High Importance" flag with System.Net.Mail's abilities? If not, is there any assembly available which can configure this setting?
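
    A hedged sketch: System.Net.Mail exposes this directly through MailMessage.Priority, which emits the priority/importance mail headers that Outlook renders as the red High Importance marker. The addresses and host below are placeholders.

        using System.Net.Mail;

        var message = new MailMessage("[email protected]", "[email protected]")
        {
            Subject = "Event notification",
            Body = "Details of the highly important event...",
            Priority = MailPriority.High  // Outlook shows this as High Importance
        };

        var client = new SmtpClient("smtp.example.com");
        client.Send(message);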

    Read the article

  • Criteria query returns hydrated object in SQLite but not SqlServer

    - by Berryl
    I have a method that returns a resource fully hydrated when the db is SQLite, but when the identical code is used against SqlServer the object is not fully hydrated. I'll explain that with the code after some brief background. In my domain, various otherwise unrelated things like an Employee or a Machine can be used as a Resource that can be allocated to. In the object model an example of this would be:

        /// <summary>Wraps a <see cref="StaffMember"/> in a <see cref="ResourceBase"/>.</summary>
        public class StaffMemberResource : ResourceBase
        {
            public virtual StaffMember StaffMember { get; private set; }

            public StaffMemberResource(StaffMember staffMember)
            {
                Check.RequireNotNull<StaffMember>(staffMember);
                base.BusinessId = staffMember.Number.ToString();
                base.Name = staffMember.Name.ToString();
                base.OrganizationName = staffMember.Department.Name;
                StaffMember = staffMember;
            }

            [UsedImplicitly]
            protected StaffMemberResource() { }
        }

    And in the db tables, there is table-per-class inheritance, where ResourceBase has a discriminator and the id of the actual resource (i.e., StaffMember):

        StaffMember - 1 ---- M - ResourceBase - 1 ----- M - Allocation

    The Code

        public override StaffMemberResource BuildResource(IActivityService activityService)
        {
            var sessionFactory = _GetSessionFactory();
            var session = sessionFactory.GetCurrentSession();
            StaffMemberResource result;
            using (var tx = session.BeginTransaction())
            {
                var propertyName = ExprHelper.GetPropertyName<StaffMember>(x => x.Number);
                var staff = session.CreateCriteria<StaffMember>()
                    .Add(Restrictions.Eq(propertyName, new EmployeeNumber(_testData.Resource_1.BusinessId)))
                    .UniqueResult<StaffMember>();

                if (staff == null)
                {
                    // ... build up a staff member
                    result = new StaffMemberResource(staff);
                }
                else
                {
                    var property = ExprHelper.GetPropertyName<StaffMemberResource>(x => x.StaffMember);
                    result = session.CreateCriteria<StaffMemberResource>()
                        .Add(Restrictions.Eq(property, staff))
                        .UniqueResult<StaffMemberResource>();
                }
                tx.Commit();
            }
            return result;
        }

    It's that second criteria query that works "properly" with SQLite but not with SqlServer. By properly I mean that the employee number is translated into a ResourceBase.BusinessId, Name is flattened out into a ResourceBase.Name, etc. Does anyone know why this might be? Cheers, Berryl

    Read the article

  • File copying utility like rsync with error handling like ddrescue, for data recovery from a hard drive with bad sectors or hardware failure

    - by purefusion
    I have a hard drive with either bad blocks or sectors that are failing to read due to potential mechanical issues, such as a bad disk head, bad motor, or some other issue that is causing the hard drive to read data excruciatingly slowly and with lots of read errors. I'm seeing an average of 50 KB/sec, with some reads dropping below 10 KB/sec, and frequently it gets stuck on a file or sector altogether, usually for quite a long time, from 2-10 minutes or more (when using rsync, before it times out). Speed seems to vary wildly, and it gets stuck on files a lot, and when it finally gets "unstuck" it only seems to last for a short burst before it gets stuck again. The drive is also very quiet with only an occasional sound of files copying (usually when it gets stuck/unstuck for a brief time, before getting stuck again). Thus, there are none of those evil sounds that are normally associated with HDD death. Someone suggested that the problems sounded like they might be caused by a misaligned disk head, which requires a lot of re-reads before it finally reads data with success. Sounds plausible, but I digress...

    Anyway, the problem with rsync is that it seems to have no decent error handling support. Obviously, it wasn't meant for use in recovering data from failing hard drives, but all the so-called "data recovery" utilities out there that are meant for such use usually focus on recovery of deleted files or messed up partitions, rather than copying files off dying hard drives. Deleted file recovery is not what I need, obviously, so perhaps you can understand my disappointment in not being able to find what I'm after yet.

    Naturally, this is where you'd probably say "You should use ddrescue!" Well, that's all fine and dandy, but I've already got most of the data backed up, so I just want to recover certain files. I'm not concerned with trying to recover a full partition block-by-block as ddrescue does. I am only interested in rescuing just specific files and directories.

    Ideally, what I'd like is some sort of cross between rsync and ddrescue: something that lets me specify source and destination as directories of normal files like rsync (rather than two full partitions as ddrescue requires), with a way to skip files with errors in an initial run, and then allows me to attempt recovery of those files with errors in a later run (with a slightly altered command, of course), perhaps even offering an option to specify the number of retry attempts ...just like how ddrescue works with blocks, only I want a utility that works with specific files/directories like rsync does. So am I daydreaming here, or does something out there exist that can do this? Or, maybe even a way to make rsync or ddrescue work in such a way? I'm really open to whatever solutions might work, so long as they let me choose which files I want to "rescue", and can skip files with errors in the initial run, and try/retry those errors again later.

    So far I've tried rsync with the following options, but it often gets stuck on a file for longer than the timeout, and ideally I'd just like it to move on to the next file and come back later to the files it gets stuck on. I don't think that's possible though. Anyway, here's what I've been using up till now:

        rsync -avP --stats --block-size=512 --timeout=600 /path/to/source/* /path/to/destination/
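
    A hedged sketch of one possible workaround rather than a known tool: GNU ddrescue accepts regular files as input, not just whole devices, and keeps a per-file map (called a logfile in older versions) of unreadable areas that later runs retry. Looped over a file list, it approximates the rsync/ddrescue hybrid described above; paths and options are assumptions.

        # pass 1 (and simply rerun later for retries): ddrescue works on plain files
        find /mnt/dying -type f -print | while IFS= read -r f; do
            out="/mnt/rescue/${f#/mnt/dying/}"
            mkdir -p "$(dirname "$out")"
            ddrescue -r1 "$f" "$out" "$out.map"   # the map records bad spots for reruns
        done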

    Read the article

  • Using the LIMIT statement in a SQLite query

    - by anselmophil
    Hi guys. I have a query that selects rows into a ListView without any limit. But now that I have implemented a SharedPreferences option with which the user can select how many rows will be displayed in the ListView, my SQLite query doesn't work. I'm passing the argument this way:

        return wDb.query(TABELANOME, new String[] {IDTIT, TAREFATIT, SUMARIOTIT},
                CONCLUIDOTIT + "=1", null, null, null, null,
                "LIMIT='" + limite + "'");

    Help, please!
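
    A hedged sketch of the fix: the last argument of Android's SQLiteDatabase.query() is the body of the LIMIT clause, not a key=value expression, so it should be the bare number (or "offset,count") with no LIMIT keyword and no quotes. Assuming limite holds the row count read from SharedPreferences:

        return wDb.query(TABELANOME,
                new String[] {IDTIT, TAREFATIT, SUMARIOTIT},
                CONCLUIDOTIT + "=1",
                null,                       // selectionArgs
                null,                       // groupBy
                null,                       // having
                null,                       // orderBy
                String.valueOf(limite));    // limit, e.g. "25"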

    Read the article

  • GCC how to block system calls within a program?

    - by CMPITG
    Can anyone tell me how to block system calls within a program, please? I am building a system which takes a piece of C source code, compiles it with gcc and runs it. For security reasons, I need to prevent the compiled program from making system calls. Is there any way to do it, at any level from the source code (e.g. stripping the header files of gcc, detecting malicious external calls, ...) down to the executable?
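
    A hedged sketch of one executable-level option on Linux, not a complete sandbox: seccomp "strict" mode confines the process to read, write, _exit and sigreturn after the prctl call, so any other system call kills it with SIGKILL. Real online judges typically combine something like this with ptrace, rlimits and an unprivileged user.

        #include <unistd.h>
        #include <sys/prctl.h>
        #include <linux/seccomp.h>

        int main(void) {
            prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT);
            /* from here on, only read/write/_exit/sigreturn are permitted */
            write(1, "hello\n", 6);   /* fine: fd 1 is already open */
            /* open(), socket(), execve() ... would now kill the process */
            _exit(0);
        }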

    Read the article

  • login form with java/sqlite

    - by tuxou
    Hi, I would like to create a login form for my application, with the possibility to add or remove users from an sqlite database. I have created the table users(nam, pass) but I can't hook it up to my login form. If someone could help: this is my login code (a sketch of the missing database check follows at the end of this entry):

        import java.awt.*;
        import java.awt.event.*;
        import javax.swing.*;

        public class login extends JFrame {
            // Variables declaration
            private JLabel jLabel1;
            private JLabel jLabel2;
            private JTextField jTextField1;
            private JPasswordField jPasswordField1;
            private JButton jButton1;
            private JPanel contentPane;
            // End of variables declaration

            public login() {
                super();
                create();
                this.setVisible(true);
            }

            private void create() {
                jLabel1 = new JLabel();
                jLabel2 = new JLabel();
                jTextField1 = new JTextField();
                jPasswordField1 = new JPasswordField();
                jButton1 = new JButton();
                contentPane = (JPanel) this.getContentPane();
                // jLabel1
                jLabel1.setHorizontalAlignment(SwingConstants.LEFT);
                jLabel1.setForeground(new Color(0, 0, 255));
                jLabel1.setText("username:");
                // jLabel2
                jLabel2.setHorizontalAlignment(SwingConstants.LEFT);
                jLabel2.setForeground(new Color(0, 0, 255));
                jLabel2.setText("password:");
                // jTextField1
                jTextField1.setForeground(new Color(0, 0, 255));
                jTextField1.setSelectedTextColor(new Color(0, 0, 255));
                jTextField1.setToolTipText("Enter your username");
                jTextField1.addActionListener(new ActionListener() {
                    public void actionPerformed(ActionEvent e) {
                        jTextField1_actionPerformed(e);
                    }
                });
                // jPasswordField1
                jPasswordField1.setForeground(new Color(0, 0, 255));
                jPasswordField1.setToolTipText("Enter your password");
                jPasswordField1.addActionListener(new ActionListener() {
                    public void actionPerformed(ActionEvent e) {
                        jPasswordField1_actionPerformed(e);
                    }
                });
                // jButton1
                jButton1.setBackground(new Color(204, 204, 204));
                jButton1.setForeground(new Color(0, 0, 255));
                jButton1.setText("Login");
                jButton1.addActionListener(new ActionListener() {
                    public void actionPerformed(ActionEvent e) {
                        jButton1_actionPerformed(e);
                    }
                });
                // contentPane
                contentPane.setLayout(null);
                contentPane.setBorder(BorderFactory.createEtchedBorder());
                contentPane.setBackground(new Color(204, 204, 204));
                addComponent(contentPane, jLabel1, 5, 10, 106, 18);
                addComponent(contentPane, jLabel2, 5, 47, 97, 18);
                addComponent(contentPane, jTextField1, 110, 10, 183, 22);
                addComponent(contentPane, jPasswordField1, 110, 45, 183, 22);
                addComponent(contentPane, jButton1, 150, 75, 83, 28);
                // login
                this.setTitle("Login To Members Area");
                this.setLocation(new Point(76, 182));
                this.setSize(new Dimension(335, 141));
                this.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
                this.setResizable(false);
            }

            /** Add Component Without a Layout Manager (Absolute Positioning) */
            private void addComponent(Container container, Component c, int x, int y, int width, int height) {
                c.setBounds(x, y, width, height);
                container.add(c);
            }

            private void jTextField1_actionPerformed(ActionEvent e) { }

            private void jPasswordField1_actionPerformed(ActionEvent e) { }

            private void jButton1_actionPerformed(ActionEvent e) {
                System.out.println("\njButton1_actionPerformed(ActionEvent e) called.");
                String username = new String(jTextField1.getText());
                String password = new String(jPasswordField1.getText());
                if (username.equals("") || password.equals("")) { // If username or password is empty
                    jButton1.setEnabled(false);
                    JLabel errorFields = new JLabel("<HTML><FONT COLOR = Blue>You must enter a username and password to login.</FONT></HTML>");
                    JOptionPane.showMessageDialog(null, errorFields);
                    jTextField1.setText("");
                    jPasswordField1.setText("");
                    jButton1.setEnabled(true);
                    this.setVisible(true);
                } else {
                    JLabel optionLabel = new JLabel("<HTML><FONT COLOR = Blue>You entered</FONT><FONT COLOR = RED> <B>" + username + "</B></FONT> <FONT COLOR = Blue>as your username.<BR> Is this correct?</FONT></HTML>");
                    int confirm = JOptionPane.showConfirmDialog(null, optionLabel);
                    switch (confirm) {
                        case JOptionPane.YES_OPTION:
                            // Attempt to log the user in
                            jButton1.setEnabled(false); // prevent 2 login attempts
                            break;
                        case JOptionPane.NO_OPTION:   // Go back; clear the fields
                            jButton1.setEnabled(false);
                            jTextField1.setText("");
                            jPasswordField1.setText("");
                            jButton1.setEnabled(true);
                            break;
                        case JOptionPane.CANCEL_OPTION:  // Cancel; clear the fields
                            jButton1.setEnabled(false);
                            jTextField1.setText("");
                            jPasswordField1.setText("");
                            jButton1.setEnabled(true);
                            break;
                    }
                }
            }

            public static void main(String[] args) {
                JFrame.setDefaultLookAndFeelDecorated(true);
                JDialog.setDefaultLookAndFeelDecorated(true);
                try {
                    UIManager.setLookAndFeel("com.sun.java.swing.plaf.windows.WindowsLookAndFeel");
                } catch (Exception ex) {
                    System.out.println("Failed loading L&F: ");
                    System.out.println(ex);
                }
                new login();
            }
        }

    and my Connectdb class:

        public class Connectdb {
            private static Connection connect;
            private static String url = "jdbc:sqlite:data.db";
            private static Statement st;
            private static ResultSet rs;

            /** Private constructor for the single connection to the db */
            private Connectdb() {
                try {
                    Class.forName("org.sqlite.JDBC");
                    connect = DriverManager.getConnection(url);
                } catch (ClassNotFoundException ex) {
                    Logger.getLogger(Connectdb.class.getName()).log(Level.SEVERE, null, ex);
                } catch (SQLException e) {
                    System.exit(e.getErrorCode());
                }
            }

            public static Connection getInstance() {
                if (connect == null) {
                    new Connectdb();
                }
                return connect;
            }

            public static void initTable(String query) {
                try {
                    Statement state = getInstance().createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
                    ResultSet res = state.executeQuery(query);
                    res.close();
                    state.close();
                } catch (SQLException e) {
                    JOptionPane.showMessageDialog(null, e.getMessage(), "ERROR ! ", JOptionPane.ERROR_MESSAGE);
                }
            }
        }
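
    A hedged sketch of the missing database check, to be called from the YES_OPTION branch of jButton1_actionPerformed (the method name and minimal error handling are my own; table and column names follow the users(nam, pass) description, and the java.sql.* imports are assumed):

        // Returns true when a row matches the given credentials.
        private boolean checkLogin(String username, String password) {
            try {
                PreparedStatement ps = Connectdb.getInstance().prepareStatement(
                        "SELECT 1 FROM users WHERE nam = ? AND pass = ?");
                ps.setString(1, username);
                ps.setString(2, password);
                ResultSet rs = ps.executeQuery();
                boolean found = rs.next();   // any row back means a match
                rs.close();
                ps.close();
                return found;
            } catch (SQLException e) {
                JOptionPane.showMessageDialog(null, e.getMessage());
                return false;
            }
        }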

    Read the article

  • Connecting sqlite database

    - by user358171
    I've seen tutorial after tutorial, but I still don't get how to connect a database to my Xcode project. I have already added it to my references and added the SQLite framework, but it still doesn't work. I even tried copying the whole code a tutorial offers, but my iPhone simulator still comes up blank. Can you please help me understand why?
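
    A hedged, C-level sketch of the first thing worth checking (the iPhone tutorials all sit on top of this): open the database with the sqlite3 C API and fail loudly. One classic gotcha that produces exactly a blank simulator screen is that sqlite3_open() happily creates a brand-new empty database when the path is wrong, so the app "works" but every query returns nothing. The path handling is an assumption; the bundled db usually has to be copied into the app's Documents directory first.

        #include <sqlite3.h>
        #include <stdio.h>

        int open_db(const char *path) {
            sqlite3 *db = NULL;
            /* sqlite3_open creates the file if it does not exist, so a bad
               path silently yields an empty database instead of an error */
            if (sqlite3_open(path, &db) != SQLITE_OK) {
                fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
                sqlite3_close(db);
                return -1;
            }
            /* ... run queries here ... */
            sqlite3_close(db);
            return 0;
        }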

    Read the article

  • Using mcrypt to pass data across a webservice is failing

    - by adam
    Hi, I'm writing an error handler script which encrypts the error data (file, line, error, message etc.) and passes the serialized array as a POST variable (using curl) to a script which then logs the error in a central db. I've tested my encrypt/decrypt functions in a single file and the data is encrypted and decrypted fine:

        define('KEY', 'abc');
        define('CYPHER', 'blowfish');
        define('MODE', 'cfb');

        function encrypt($data) {
            $td = mcrypt_module_open(CYPHER, '', MODE, '');
            $iv = mcrypt_create_iv(mcrypt_enc_get_iv_size($td), MCRYPT_RAND);
            mcrypt_generic_init($td, KEY, $iv);
            $crypttext = mcrypt_generic($td, $data);
            mcrypt_generic_deinit($td);
            return $iv.$crypttext;
        }

        function decrypt($data) {
            $td = mcrypt_module_open(CYPHER, '', MODE, '');
            $ivsize = mcrypt_enc_get_iv_size($td);
            $iv = substr($data, 0, $ivsize);
            $data = substr($data, $ivsize);
            if ($iv) {
                mcrypt_generic_init($td, KEY, $iv);
                $data = mdecrypt_generic($td, $data);
            }
            return $data;
        }

        echo "<pre>";
        $data = md5('');
        echo "Data: $data\n";
        $e = encrypt($data);
        echo "Encrypted: $e\n";
        $d = decrypt($e);
        echo "Decrypted: $d\n";

    Output:

        Data: d41d8cd98f00b204e9800998ecf8427e
        Encrypted: ê÷#¯KžViiÖŠŒÆÜ,ÑFÕUW£´Œt?†÷>c×åóéè+„N
        Decrypted: d41d8cd98f00b204e9800998ecf8427e

    The problem is, when I put the encrypt function in my transmit file (tx.php) and the decrypt in my receive file (rx.php), the data is not fully decrypted (both files have the same set of constants for key, cypher and mode).

        Data before passing:
        a:4:{s:3:"err";i:1024;s:3:"msg";s:4:"Oops";s:4:"file";s:46:"/Applications/MAMP/htdocs/projects/txrx/tx.php";s:4:"line";i:80;}

        Data decrypted:
        Mª4:{s:3:"err";i:1024@7OYªç`^;g";s:4:"Oops";s:4:"file";sôÔ8F•Ópplications/MAMP/htdocs/projects/txrx/tx.php";s:4:"line";i:80;}

    Note the random characters in the middle. My curl call is fairly simple:

        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, 'data=' . $data);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $output = curl_exec($ch);

    Things I suspect could be causing this:

    - Encoding of the curl request
    - Something to do with mcrypt padding missing bytes
    - I've been staring at it too long and have missed something really, really obvious

    If I turn off the crypt functions (so the transfer tx-rx is unencrypted) the data is received fine. Any and all help much appreciated! Thanks, Adam
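
    A hedged sketch of the most likely fix: the ciphertext (IV included) is raw binary, so bytes that mean something in an urlencoded POST body ('&', '=', '+', NUL, ...) get mangled in transit, which matches corruption appearing at arbitrary offsets. Encoding the payload for transport on the way out and decoding it on the way in keeps mcrypt out of the picture entirely:

        // tx.php -- encode before POSTing
        curl_setopt($ch, CURLOPT_POSTFIELDS,
            'data=' . urlencode(base64_encode(encrypt($data))));

        // rx.php -- PHP has already urldecoded $_POST for you
        $data = decrypt(base64_decode($_POST['data']));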

    Read the article
