Search Results

Search found 56999 results on 2280 pages for 'object oriented database'.


  • My system administrator set up 2 databases that sync, master-master. However, these two databases are no longer in sync

    - by Alex
    DB1 and DB2. I made changes to DB1, and it does not seem to be on DB2. When I do "SHOW SLAVE STATUS\G" on DB2, there seems to be an error:

        mysql> show slave status\G
        *************************** 1. row ***************************
                 Slave_IO_State: Waiting for master to send event
                    Master_Host:
                    Master_User:
                    Master_Port:
                  Connect_Retry: 60
                Master_Log_File: mysql-bin.0005496
            Read_Master_Log_Pos: 5445649315
                 Relay_Log_File: mysqld-relay-bin.0041705
                  Relay_Log_Pos: 1624302119
          Relay_Master_Log_File: mysql-bin.0004461
               Slave_IO_Running: Yes
              Slave_SQL_Running: No
                Replicate_Do_DB:
            Replicate_Ignore_DB:
             Replicate_Do_Table:
         Replicate_Ignore_Table:
        Replicate_Wild_Do_Table:
        Replicate_Wild_Ignore_Table:
                     Last_Errno: 1062
                     Last_Error: Error 'Duplicate entry '4779' for key 1' on query. Default database: 'falc'. Query: 'INSERT INTO `log` (`anon_id`, `created_at`, `query`, `episode_url`, `detail_id`, `ip`) VALUES ('fdzn1d45kMavF4qbyePv', '2009-11-19 04:19:13', 'amazon', '', '', '130.126.40.57')'
                   Skip_Counter: 0
            Exec_Master_Log_Pos: 162301982
                Relay_Log_Space: 136505187184
                Until_Condition: None
                 Until_Log_File:
                  Until_Log_Pos: 0
             Master_SSL_Allowed: No
             Master_SSL_CA_File:
             Master_SSL_CA_Path:
                Master_SSL_Cert:
              Master_SSL_Cipher:
                 Master_SSL_Key:
          Seconds_Behind_Master: NULL
        1 row in set (0.00 sec)

    Then, I did show tables, and it seems like DB2 is lacking a table that I created on DB1... that means that for some reason, DB2 stopped syncing with DB1. How can I simply allow them to be in full synchronization again? All I want is DB2 to be exactly the same as DB1!
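
    A common way past a single duplicate-key error like this is to skip the offending event on the slave and let replication resume; this is only reasonable if DB2's existing row 4779 can be kept. A minimal sketch, to be run on DB2:

        STOP SLAVE;
        SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;  -- skip just the failing event
        START SLAVE;
        SHOW SLAVE STATUS\G                     -- Slave_SQL_Running should return to 'Yes'

    If the two servers have diverged beyond that one row (the missing table suggests they have), re-seeding DB2 from a fresh dump of DB1 taken with mysqldump --master-data is the safer route.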

    Read the article

  • Solr startup script problem

    - by Camran
    I have installed Solr and it finally works... Now I am having problems setting it up to start automatically with a start command. I have followed a tutorial and created a file called solr in the /etc/init.d dir... Here is that file:

        #!/bin/sh -e
        # SOLR auto-start
        #
        # description: auto-starts solr engine
        # processname: solr-production
        # pidfile: /var/run/solr-production.pid

        NAME="solr"
        PIDFILE="/var/run/solr-production.pid"
        LOG_FILE="/var/log/solr-production.log"
        SOLR_DIR="/etc/jetty"
        JAVA_OPTIONS="-Xmx1024m -DSTOP.PORT=8079 -DSTOP.KEY=stopkey -jar start.jar"
        JAVA="/usr/bin/java"

        start() {
            echo -n "Starting $NAME... "
            if [ -f $PIDFILE ]; then
                echo "is already running!"
            else
                cd $SOLR_DIR
                $JAVA $JAVA_OPTIONS 2> $LOG_FILE &
                sleep 2
                echo `ps -ef | grep -v grep | grep java | awk '{print $2}'` > $PIDFILE
                echo "(Done)"
            fi
            return 0
        }

        stop() {
            echo -n "Stopping $NAME... "
            if [ -f $PIDFILE ]; then
                cd $SOLR_DIR
                $JAVA $JAVA_OPTIONS --stop
                sleep 2
                rm $PIDFILE
                echo "(Done)"
            else
                echo "can not stop, it is not running!"
            fi
            return 0
        }

        case "$1" in
            start)
                start
                ;;
            stop)
                stop
                ;;
            restart)
                stop
                sleep 5
                start
                ;;
            *)
                echo "Usage: $0 (start | stop | restart)"
                exit 1
                ;;
        esac

    Whenever I do solr -start I get this error:

        Error occurred during initialization of VM
        Could not reserve enough space for object heap

    I think this is because of the file above... Also, here is where I have Solr installed: /var/www/solr, and here is where the start.jar file is located: /var/www/start.jar. Help me out if you know what's causing this. Thanks. BTW: the OS is Ubuntu 9.10.
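
    Two details in the script stand out and may explain the heap error: the JVM is asked for -Xmx1024m, which a small box may not be able to reserve, and SOLR_DIR points at /etc/jetty while start.jar reportedly lives in /var/www. A sketch of the two lines to adjust (the 256m figure is a guess to be tuned, not a recommendation):

        SOLR_DIR="/var/www"
        JAVA_OPTIONS="-Xmx256m -DSTOP.PORT=8079 -DSTOP.KEY=stopkey -jar start.jar"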

    Read the article

  • Move the uploads folder in Wordpress

    - by Victor Hurdugaci
    Currently, my WordPress upload folder is located in \wp-content\uploads. Initially there was no structure, so all files were put there. After a while it was changed to upload files into \wp-content\uploads\YEAR\MONTH. Now that folder contains a mix of files (those starting with + are folders), like:

        +wp-content
        | +2010
        | | +02
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | | +01
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | +2009
        | | +12
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | | +11
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | +..
        | | | ..
        | Unstructured-file-1
        | Unstructured-file-2
        | ...
        | Unstructured-file-n

    Based on the dates of the unstructured files, I would like to move them into the structured hierarchy (based on date, move each one to a folder \wp-content\uploads\YEAR\MONTH). Now, my questions are:

    1. Where do I write and execute a script to do the move (I don't have full access to the server, just to a cPanel and to the WordPress admin page)?
    2. What must be updated so that the links in posts that reference the unstructured files point to the new location of those files? (See the sketch below.)
    3. Not fully related to the previous description: is it alright to move the whole uploads folder to another location, like \uploads?

    PS: Moving the files/updating the database manually is not an option :)
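
    On the second question: WordPress keeps attachment URLs in the posts table, so after moving a file its references can be rewritten with a search-and-replace query. A sketch using the default wp_posts table and one of the file names above (the 2009/12 target folder is illustrative; back up the database first, and note that serialized values in wp_postmeta need a serialization-aware tool instead):

        UPDATE wp_posts SET post_content = REPLACE(post_content,
            '/wp-content/uploads/Unstructured-file-1',
            '/wp-content/uploads/2009/12/Unstructured-file-1');

        UPDATE wp_posts SET guid = REPLACE(guid,
            '/wp-content/uploads/Unstructured-file-1',
            '/wp-content/uploads/2009/12/Unstructured-file-1');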

    Read the article

  • SQuirreL Client: Opening up a table in a separate tab

    - by Dalin Seivewright
    I've switched to SQuirreL Client to view an Oracle database with a variety of tables. At any given time, I need to look at the contents of several related tables at the same time. The problem is that, at least by default, SQuirreL Client does not open multiple tables at a time: when you click on a different table object, the main view is refreshed with the newly selected table's data. Oracle's SQL Developer (which I am trying to move away from) did this by default too, if I recall, but there was an option to "freeze" panes. Does a similar option exist in SQuirreL Client? I do not require the ability to view the contents of two different tables on the same screen (i.e. a split view), but I would like tabs for each table so I can quickly switch between table views rather than having to find the table in the table list over and over again. Note: I am using this in a professional capacity, but if it does not "belong" here then I suppose it could be moved to Super User.

    Read the article

  • Spreadsheet application that can handle big data on OS X

    - by Peter
    I've been working with Excel for quite a while for some statistical analysis that I do regularly. The size of the data that I'm working with has gotten much larger as of late, however. The layout of the data in question is quite simple: usually just a few columns, which include a UNIX timestamp, an EST value, a proprietary numeric value, and finally an average of the rows whose timestamps fall within +/- 1000 of that row's timestamp (a little AVERAGEIFS() formula). That formula and the EST conversion are the only formulas in the sheet. I'm beginning to work with files with 500,000+ rows. Running the average formula down an entire column takes forever. The end result is the production of print-worthy graphs. I'm looking for either a UNIX command-line utility or a separate spreadsheet/database application that can handle this amount of data without melting my CPU or making me wait an hour. Is there anything out there? TL;DR: a simple Excel sheet with over half a million rows is getting too slow to work with. OS X alternatives?
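
    If the data moves into a database, the AVERAGEIFS() logic becomes a correlated subquery that engines handle well with an index on the timestamp column. A sketch against a hypothetical table readings(ts, est, val) mirroring the columns described (this runs as-is in the sqlite3 CLI that ships with OS X):

        SELECT a.ts, a.est, a.val,
               (SELECT AVG(b.val)
                  FROM readings b
                 WHERE b.ts BETWEEN a.ts - 1000 AND a.ts + 1000) AS window_avg
          FROM readings a;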

    Read the article

  • Mesh-networked servers via VPN

    - by microspino
    I have a design idea and I would like some advice from SF about it. I have 5 customers with small real-estate databases. I've built a desktop app for them, and now they would like to merge their databases to share their data. I don't want to centralize everything in one place, nor do I want to do maintenance for servers. They also told me that all of them have little servers and maintenance guys available in their offices. Although everything seems suitable for a web application, I had the idea to experiment with something new: each customer's small server would be connected to the others in a sort of mesh network, through VPNs, without a single point of failure. If one of the servers went down, the customers could still connect to their databases from one of the other mesh-networked servers instead of from the local one that is down. During normal operations, all the servers sync the db with the others through VPNs. I can accept a half-day window of non-synced data; in other words, since I don't need real-time synchronization, the servers don't always have to stay in sync. I can migrate my data over to non-SQL technologies like CouchDB or Redis or whatever you suggest. As you can see I don't have a lot of constraints, and although I could go with a web application, I would like to delegate and decentralize support, data privacy and management as much as I can to my customers' offices. Is that a crazy idea? Do you know if something similar exists? Which technology would you suggest?

    Read the article

  • Strategy for Incremental Datasource fetchings in Excel

    - by user1352530
    I am in a scenario with a table that is refreshed by a third-party app every week. I need to keep accumulating all the data in Excel, using an ODBC connection to the database. I am wondering:

    Approach 1: Is there a way to force Excel to append results on every update (the update would be triggered according to a parameter that indicates the week)? I tried to define the table the connection loads into using a dynamic reference, but once it is anchored the first time, the table position is never redefined.

    Approach 2: Use an ETL to accumulate all weekly results into a staging table and then connect Excel to it in real time. But I would need a mechanism for caching old data, as I cannot let Excel's opening time grow without bound. Imagine after 10 years: Excel would need to load 10 years of data at opening before showing anything. Is there a way to store already-fetched data and increment it in real time (when the book is opened) by selecting only new data (with a query/filter or something)? (See the sketch below.)

    Thanks

    EDIT: Maybe it's better to ask it this way: what is the optimal strategy for a table that keeps growing and needs to be read in real time by Excel? I just don't want to fetch absolutely all the data after some months...
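
    For approach 2, the usual pattern is to keep only a high-water mark on the reading side and pull rows above it. Table and column names here are hypothetical, just to illustrate the shape of the incremental query:

        -- 37 stands for the last week already cached locally;
        -- Excel (or the ETL) tracks this value and substitutes it on each refresh
        SELECT *
          FROM staging_results
         WHERE week_number > 37;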

    Read the article

  • How do I connect remotely to SQL Server from Windows client?

    - by humble_coder
    Hi All, Having a bit of an issue connecting to SQL Server remotely from Windows. I've verified that all of my settings are correct via SQL Server Management Studio Express and SQL Server Configuration Manager. I can connect remotely using ODBC drivers from other OSes (e.g. OS X, Linux, etc). However, when I connect with the same credentials from a remote Windows machine using "SQL Server" as the driver, I am told that the system cannot connect. I've tried creating an ODBC Data Source and I get the same error:

        Connection failed: SQLState: '01000'
        SQL Server Error: 14
        [Microsoft][ODBC SQL Server Driver][TCP/IP Sockets]ConnectionOpen(InvalidInstance()).
        Connection failed: SQLState: '08001'
        SQL Server Error: 14
        [Microsoft][ODBC SQL Server Driver][TCP/IP Sockets]Invalid Connection

    From the non-Windows machines I can use the IP address of the SQL Server just fine. However, on the remote Windows machine, neither the IP address nor the named instance works. FYI, I can create an ODBC Data Source using the named instance on the machine actually running the SQL Server (but this is, of course, nothing special, just proof that it isn't completely hosed). One interesting note: if I use SQL Studio 2005 from a Windows client machine, I can use the IP address to connect remotely. Still, the whole reason I bring this up is because I need to use a software package I've written to connect to SQL Server remotely from Windows machines as well. Previously the solution was only needed to transfer data from SQL Server into a PostgreSQL or MySQL database on non-Windows machines (due to DBA preference). However, now they also want to move the data from the legacy software to MySQL even on Windows. Any assistance would be most appreciated. Feel free to provide a full example connection string. Best
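
    Since a full example connection string was requested: a typical ODBC string that forces TCP and an explicit port looks like the line below (the server address, port, database and credentials are placeholders). Forcing the tcp: prefix sidesteps the Named Pipes fallback that InvalidInstance() errors often point to:

        Driver={SQL Server};Server=tcp:192.168.0.10,1433;Database=MyDatabase;Uid=myuser;Pwd=mypassword;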

    Read the article

  • MySQL stops accepting connections over 3306, still working on localhost

    - by Ben Dilts
    I have a MySQL database that stopped accepting connections from my web server altogether. So I SSH'ed into the server and started checking its vitals. The hard disks had plenty of open space, and there was plenty of available memory and swap space. Nothing was eating up the CPU (close to 100% idle). I even connected to MySQL locally and ran a few queries without any issues. But SHOW PROCESSLIST only showed my own connection, no others. Worst of all, in the MySQL log, no errors even remotely coincided with the unavailability of the server. On the web server, I got an error saying "Lost connection to MySQL server during query" at the moment the unavailability started, followed by a bunch of "MySQL server has gone away" errors. There's only one other application on the server that accepts network connections, and I killed that one (in case it was holding too many open connections or something), but it didn't help. Finally I just restarted the MySQL process, and everything is (for now) working again. What else should I check in these circumstances? Any idea what the problem might be? And how might I verify that is in fact the problem?
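
    Not a diagnosis, but a few server-side counters worth capturing next time it happens; if the other network-facing application was holding connections open, the connection limit is the first thing to rule out:

        SHOW VARIABLES LIKE 'max_connections';
        SHOW GLOBAL STATUS LIKE 'Max_used_connections';
        SHOW GLOBAL STATUS LIKE 'Threads_connected';
        SHOW GLOBAL STATUS LIKE 'Aborted_connects';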

    Read the article

  • mysql - moving to a lower performance server, how small can I go?

    - by pedalpete
    I've been running a site for a few years now which really isn't growing in traffic, and I want to save some money on hosting, but keep it going for the loyal users of the site and API. The database has a nearly 4 million row table, and runs on a 4 GB dual Xeon 5320 server. When I check server stats on this server with ps -aux, I see MySQL running at about 11% capacity, so no serious load. The main query against MySQL runs in about 0.45 seconds. I popped over to linode.com to see what kind of performance I could get out of one of their tiny boxes, and their 360 MB RAM Xen VPS returns the same query in 20 seconds. Clearly not good enough. I've looked at the MySQL variables, and they are both very similar (I've included the show variables output below, if anybody is interested). Is there a good way to decide on what size server is needed based on what I'm coming from? Is it RAM that is likely making the difference with the large table size? Is there a way for me to figure out how much RAM would be ideal? Here's the output of show variables (though I'm not sure it is important):

        | auto_increment_increment  | 1                           |
        | auto_increment_offset     | 1                           |
        | automatic_sp_privileges   | ON                          |
        | back_log                  | 50                          |
        | basedir                   | /usr/                       |
        | bdb_cache_size            | 8384512                     |
        | bdb_home                  | /var/lib/mysql/             |
        | bdb_log_buffer_size       | 262144                      |
        | bdb_logdir                |                             |
        | bdb_max_lock              | 10000                       |
        | bdb_shared_data           | OFF                         |
        | bdb_tmpdir                | /tmp/                       |
        | binlog_cache_size         | 32768                       |
        | bulk_insert_buffer_size   | 8388608                     |
        | character_set_client      | latin1                      |
        | character_set_connection  | latin1                      |
        | character_set_database    | latin1                      |
        | character_set_filesystem  | binary                      |
        | character_set_results     | latin1                      |
        | character_set_server      | latin1                      |
        | character_set_system      | utf8                        |
        | character_sets_dir        | /usr/share/mysql/charsets/  |
        | collation_connection      | latin1_swedish_ci           |
        | collation_database        | latin1_swedish_ci           |
        | collation_server          | latin1_swedish_ci           |
        | completion_type           | 0                           |
        | concurrent_insert         | 1                           |
        | connect_timeout           | 10                          |
        | datadir                   | /var/lib/mysql/             |
        | date_format               | %Y-%m-%d                    |
        | datetime_format           | %Y-%m-%d %H:%i:%s           |
        | default_week_format       | 0                           |
        | delay_key_write           | ON                          |
        | delayed_insert_limit      | 100                         |
        | delayed_insert_timeout    | 300                         |
        | delayed_queue_size        | 1000                        |
        | div_precision_increment   | 4                           |
        | keep_files_on_create      | OFF                         |
        | engine_condition_pushdown | OFF                         |
        | expire_logs_days          | 0                           |
        | flush                     | OFF                         |
        | flush_time                | 0                           |
        | ft_boolean_syntax         | + -><()~*:""&|              |

    For some reason, that table formats properly in the preview, but apparently not when viewing the question. Hopefully it isn't needed anyway.
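
    One way to put a number on "how much RAM is enough" is to measure how big the table's data and indexes actually are; if the indexes the main query touches fit in the new box's buffers, the 0.45-second figure has a chance of surviving the move. A sketch using information_schema (the schema name is a placeholder):

        SELECT table_name,
               ROUND(data_length  / 1024 / 1024) AS data_mb,
               ROUND(index_length / 1024 / 1024) AS index_mb
          FROM information_schema.TABLES
         WHERE table_schema = 'your_database'
         ORDER BY data_length + index_length DESC;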

    Read the article

  • MySQL Tables Missing/Corrupt After Recreation

    - by Synetech inc.
    Hi, Yesterday I dumped my MySQL databases to an SQL file and renamed the ibdata1 file. I then recreated it and imported the SQL file and moved the new ibdata1 file to my MySQL data directory, deleting the old one. I’ve done it before without issue, however this time something is not right. When I examine the (personal, not MySQL config) databases, they are all there, but they are empty… sort of. The data directory still has the .ibd files with the correct content in them and I can view the table list in the databases, but not the tables themselves. (I have file-per-table enabled, and am using InnoDB as default for everything.) For example with the urls database and its urls table, I can successfully open mysql.exe or phpMyAdmin and use urls;. I can even show tables; to see the expected table, but then when I try to describe urls; or select * from urls;, it complains that the table does not exist (even though it just listed it). (The MySQL Administrator lists the databases, but does not even list the tables, it indicates that the dbs are completely empty.) The problem now is that I have already deleted the SQL file (and cannot recover it even after scouring my hard-drive). So I am trying to figure out a way to repair these databases/tables. I can’t use the table repair function since it complains that the table does not exist, and I can’t dump them because again, it complains that the tables don’t exist. Like I’ve said, the data itself is still present in the .ibd files and the table names are present. I just need a way to get MySQL to recognize that the tables exist in the databases (I can find the column names of the tables in question in the ibdata1 file using a hex-editor). Any idea how I can repair this type of corruption? I don’t mind rolling up my sleeves, digging in, and taking a bunch of steps to fix it. Thanks a lot.
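
    Since the .ibd files are intact, one avenue is to recreate each table with its original schema and swap the old tablespace back in. This is a hedged sketch only: InnoDB records a tablespace id inside each .ibd, a rebuilt ibdata1 generally will not match it, and in that case IMPORT TABLESPACE refuses the file and specialised recovery tooling is needed. Try it on a copy of the data directory:

        -- after recreating the table with its original column definitions:
        ALTER TABLE urls DISCARD TABLESPACE;  -- drops the new, empty urls.ibd
        -- copy the preserved urls.ibd back into the database directory, then:
        ALTER TABLE urls IMPORT TABLESPACE;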

    Read the article

  • mysqldump --where with = operator doesn't get all rows - Help!

    - by JonathanLIVE
    I have a situation with a particular table that now thinks it contains 4 petabytes of data. I know that sounds cool, but I assure you, it is only on a 60 GB partition. This table has 9 fields in it. One of them is a domain_id field. It is the best field to identify the rows by, as there are only approximately 6300 distinct values. The only other field option to match has over 2 million records, and that's just more difficult. I cannot do a straight mysqldump because it will attempt to output all 4 PB of data and fill the drive long before it gets close to that, so I need to surgically remove the good stuff, destroy the db, and recreate it. I believe if I can do a dump for each domain_id record, then I will get most of the usable data out of it. This is what I am trying to use:

        mysqldump -u root --skip-opt -q --no-create-info --skip-add-drop-table --max_allowed_packet=1000000000 database table --where="domain_id=10" > domains10.sql

    Using this I expect every row with domain_id 10 to be exported. However, when I check the export, I am only getting 1 row, even though when I look at the db there are many, many rows. It is as though the operator just finds one, then gives up. I have tried various operators. Using the < or > operators I am able to get more of the data, but the export stops short at certain rows where the data has been compromised. With over 6000 to go through, I can't narrow down which rows are being affected in the export easily enough. So, what I need is an operator that will basically do what I thought = would do: simply give me an export of all records that match the specific field. Also note, the only way I got this DB even accessible is through innodb_force_recovery = 3. So I need to get this right, because after this is done, I have to drop the db in order to make MySQL functional again. Looking forward to any helpful answers.
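
    A quick sanity check before blaming the = operator: count the matching rows server-side and compare that with the row count in the dump file. If the counts agree, the WHERE clause is fine and the loss is happening on output, which, with innodb_force_recovery in play, usually means the dump aborts when it hits a damaged page:

        -- `table` stands for the real table name used in the mysqldump command above
        SELECT COUNT(*) FROM `table` WHERE domain_id = 10;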

    Read the article

  • Understanding MySQL Query Caches and when to implement it?

    - by Jeff
    On our current MySQL server the query cache is enabled:

        Qcache_hits: 31913
        Qcache_inserts: 50959
        Qcache_lowmem_prunes: 9320
        Qcache_not_cached: 209320
        Qcache_queries_in_cache: 986
        Com_update: 0
        Com_delete: 0

    I do not fully understand the query cache; I am reading about it currently and trying to understand it. Our database holds inventory data, customer data, employee data, sales data and so forth. A query is very rarely run more than once. The closest thing to a query being run twice is viewing some specific sales information twice. But basically everything in our system changes constantly. It is always being updated, deleted and inserted, and off the top of my head I can't picture users running the same query twice within a week. Do I even need to have the query cache enabled? I am guessing that the inserts figure means 51k entries have been added, but only 986 of those are still stored? Would it be an idea to reset the cache, watch it for a week, and check how many of the cached queries are actually hit, maybe on a weekly basis, to see if it is returning any benefit? Any help/guidance on this is appreciated, thanks
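
    Given the counters above (roughly 31k hits against 209k not-cached and 9k low-memory prunes) and a constantly changing dataset, the cache may well cost more than it returns, since every write invalidates cached entries for the affected table. It can be watched and, if the ratio stays poor, switched off at runtime; a sketch:

        SHOW GLOBAL STATUS LIKE 'Qcache%';    -- hit/insert/prune ratios over time
        SHOW VARIABLES LIKE 'query_cache%';   -- current size and type
        SET GLOBAL query_cache_size = 0;      -- disable to compare throughput (persist in my.cnf)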

    Read the article

  • Removed Old Domain Trust. Now Progress (9.1D) can't open DB File

    - by RLH
    My company has an old server, running Progress 9.1D on a Windows 2000 VM, which was used by our company ERP (Vantage 6 by Epicor). Vantage was our primary ERP for a very long time. About 2 years ago, we migrated to a larger, corporate ERP and we cancelled our service contract with Epicor. Yesterday, we removed an AD trust between the corporate domain and the old AD domain we used in the days of Vantage. After restarting the virtual server, I have been able to start the ProService for 9.1D Windows service; however, I can not get Vantage to start back up. When I run the application, I get the error in the message listed below.

        Transcript: ** Could not connect to server for database [progress db file], errno 0. (1432)

    How can I fix this? FYI, I haven't had to work with Progress in years, and even then I wouldn't have considered myself a "novice"; I'm even less knowledgeable than that title would suggest. Vantage had a lot of internal tools, and I recall that Epicor support managed to prevent .pf scripts from being executed. If a Progress-specific patch needed to be applied, you had to do it within the Vantage software OR they had to remote into the machine to apply it. I may not be able to run a .pf script, but I do know that I can log into the console-based server application. (Yes, I can't even recall what that utility was called. It is sad.) It's been a long time, and I never had to dig into Progress that much. Please help, and feel free to ask questions. If you need more info, I'll update this post.

    Read the article

  • Mysql: Working With 192 Trillion Records... (Yes, 192 Trillion)

    - by Sarah
    Here's the question... Considering 192 trillion records, what should my considerations be? My main concern is speed. Here's the table...

        CREATE TABLE `ref` (
            `id` INTEGER(13) NOT NULL AUTO_INCREMENT,
            `rel_id` INTEGER(13) NOT NULL,
            `p1` INTEGER(13) NOT NULL,
            `p2` INTEGER(13) DEFAULT NULL,
            `p3` INTEGER(13) DEFAULT NULL,
            `s` INTEGER(13) NOT NULL,
            `p4` INTEGER(13) DEFAULT NULL,
            `p5` INTEGER(13) DEFAULT NULL,
            `p6` INTEGER(13) DEFAULT NULL,
            PRIMARY KEY (`id`),
            KEY (`s`),
            KEY (`rel_id`),
            KEY (`p3`),
            KEY (`p4`)
        );

    Here are the queries...

        SELECT id, s FROM ref WHERE rel_id="$rel_id" AND p3="$p3" AND p4="$p4"

        SELECT rel_id, p1, p2, p3, p4, p5, p6 FROM ref WHERE id="$id"

        INSERT INTO ref (rel_id, p1, p2, p3, s, p4, p5, p6)
        VALUES ("$rel_id", "$p1", "$p2", "$p3", "$s", "$p4", "$p5", "$p6")

    Here are some notes...

    - The SELECTs will be done much more frequently than the INSERT. However, occasionally I want to add a few hundred records at a time.
    - Load-wise, there will be nothing for hours, then maybe a few thousand queries all at once.
    - Don't think I can normalize any more (need the p values in a combination).
    - The database as a whole is very relational.
    - This will be the largest table by far (the next largest is about 900k).

    UPDATE (08/11/2010): Interestingly, I've been given a second option... Instead of 192 trillion I could store 2.6*10^16 (15 zeros, meaning 26 quadrillion)... But in this second option I would only need to store one bigint(18) as the index in a table. That's it: just the one column. So I would just be checking for the existence of a value, occasionally adding records, never deleting them. That makes me think there must be a better solution than MySQL for simply storing numbers... Given this second option, should I take it, or stick with the first?

    [edit] Just got news of some testing that's been done: 100 million rows with this setup returns the query in 0.0004 seconds [/edit]
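
    One structural observation the first query invites: it filters on rel_id, p3 and p4, but the table only has separate single-column keys, so MySQL must pick one and scan (or attempt an index merge). A composite key matching the predicate lets the lookup go straight to the matching rows; a sketch, noting that at this scale it would need to be declared in the CREATE TABLE up front, since adding it to an existing multi-trillion-row table would itself be a massive rebuild:

        ALTER TABLE ref ADD KEY idx_rel_p3_p4 (`rel_id`, `p3`, `p4`);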

    Read the article

  • Rails Rake Error with XAMPP mysql database

    - by edu222
    I have installed XAMPP on my Win7 machine and I have the Apache server/MySQL running on there. I set up Rails to work with XAMPP as described here: XAMPP and RAILS. That tutorial advises you to add this code to the XAMPP httpd.conf:

        Listen 3000
        LoadModule rewrite_module modules/mod_rewrite.so

        #################################
        # RUBY SETUP
        #################################
        <VirtualHost *:3000>
            ServerName rails
            DocumentRoot "c:/xampp/htdocs/FirstProject/public"
            <Directory "c:/xampp/htdocs/FirstProject/public/">
                Options ExecCGI FollowSymLinks
                AllowOverride all
                Allow from all
                Order allow,deny
                AddHandler cgi-script .cgi
                AddHandler fastcgi-script .fcgi
            </Directory>
        </VirtualHost>
        #################################
        # RUBY SETUP
        #################################

    XAMPP runs on the default localhost and MySQL remains unchanged, without a password. I created a Rails app with a MySQL database like this:

        rails -d mysql C:/xampp/htdocs/FirstProject

    Then I started ruby script/server from within the FirstProject location, and localhost:3000/ shows the classic Rails welcome. I then ran a basic scaffold command:

        ruby script/generate scaffold FirstProject name:string email:string

    When I run the rake db:migrate command I get the following error:

        C:\xampp\htdocs\FirstProject>rake db:migrate --trace
        (in C:/xampp/htdocs/FirstProject)
        ** Invoke db:migrate (first_time)
        ** Invoke environment (first_time)
        ** Execute environment
        ** Execute db:migrate
        rake aborted!
        undefined method `init' for Mysql:Class
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:70:in `mysql_connection'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `send'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `new_connection'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:245:in `checkout_new_connection'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:188:in `checkout'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `loop'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `checkout'
        C:/Ruby/lib/ruby/1.8/monitor.rb:242:in `synchronize'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:183:in `checkout'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:98:in `connection'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:326:in `retrieve_connection'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:123:in `retrieve_connection'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:115:in `connection'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:435:in `initialize'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:400:in `new'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:400:in `up'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:383:in `migrate'
        C:/Ruby/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/tasks/databases.rake:116
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `call'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `execute'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `each'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `execute'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:597:in `invoke_with_call_chain'
        C:/Ruby/lib/ruby/1.8/monitor.rb:242:in `synchronize'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:583:in `invoke'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2051:in `invoke_task'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `each'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2023:in `top_level'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2001:in `run'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/bin/rake:31
        C:/Ruby/bin/rake:19:in `load'
        C:/Ruby/bin/rake:19

    Any idea on how to fix this? Thanks in advance

    Read the article

  • migrating simple rails database to mysql

    - by joseph-misiti
    I am interested in creating a Rails app with a MySQL database. I am new to Rails and am just trying to start by creating something simple:

        rails -d mysql MyMoviesSQL
        cd MyMoviesSQL
        script/generate scaffold Movies title:string rating:integer
        rake db:migrate

    I am seeing the following error:

        rake aborted!
        NoMethodError: undefined method `ord' for 0:Fixnum: SET NAMES 'utf8'

    If I do a trace:

        ** Invoke db:migrate (first_time)
        ** Invoke environment (first_time)
        ** Execute environment
        ** Execute db:migrate
        rake aborted!
        NoMethodError: undefined method `ord' for 0:Fixnum: SET NAMES 'utf8'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract_adapter.rb:219:in `log'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:323:in `execute'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:599:in `configure_connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:594:in `connect'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:203:in `initialize'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:75:in `new'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:75:in `mysql_connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `send'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `new_connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:245:in `checkout_new_connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:188:in `checkout'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `loop'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `checkout'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/monitor.rb:242:in `synchronize'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:183:in `checkout'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:98:in `connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:326:in `retrieve_connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:123:in `retrieve_connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:115:in `connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:435:in `initialize'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:400:in `new'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:400:in `up'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:383:in `migrate'
        /Library/Ruby/Gems/1.8/gems/rails-2.3.5/lib/tasks/databases.rake:116
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `call'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `execute'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `each'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `execute'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:597:in `invoke_with_call_chain'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/monitor.rb:242:in `synchronize'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:583:in `invoke'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2051:in `invoke_task'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `each'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2023:in `top_level'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2001:in `run'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/bin/rake:31
        /usr/bin/rake:19:in `load'
        /usr/bin/rake:19

    Here are my versions: rails 2.3.5, ruby 1.8.6. gem list (* LOCAL GEMS *): actionmailer (2.3.5, 1.3.6), actionpack (2.3.5, 1.13.6), actionwebservice (1.2.6), activerecord (2.3.5, 1.15.6), activeresource (2.3.5), activesupport (2.3.5, 1.4.4), acts_as_ferret (0.4.1), capistrano (2.0.0), cgi_multipart_eof_fix (2.5.0), daemons (1.0.9), dbi (0.4.3), deprecated (2.0.1), dnssd (0.6.0), fastthread (1.0.1), fcgi (0.8.7), ferret (0.11.4), gem_plugin (0.2.3), highline (1.2.9), hpricot (0.6), libxml-ruby (0.9.5, 0.3.8.4), mongrel (1.1.4), needle (1.3.0), net-sftp (1.1.0), net-ssh (1.1.2), rack (1.0.1), rails (2.3.5), rake (0.8.7, 0.7.3), RedCloth (3.0.4), ruby-openid (1.1.4), ruby-yadis (0.3.4), rubygems-update (1.3.6), rubynode (0.1.3), sqlite3-ruby (1.2.1), termios (0.9.4)

    Also, if I need to add a patch to Fixnum, can someone please tell me which file to add the patch to? Thanks for your help.

    Read the article

  • MySQL Database Query Problem

    - by moustafa
    I need your help! I need to query a table in my database that has records of goods sold. I want the query to detect a particular product and also calculate the quantity sold. There are 300 products now, but the number will increase in the future. Below is a sample of my DB table:

        #----------------------------
        # Table structure for litorder
        #----------------------------
        CREATE TABLE `litorder` (
          `id` int(10) NOT NULL auto_increment,
          `name` varchar(50) NOT NULL default '',
          `address` varchar(50) NOT NULL default '',
          `xdate` date NOT NULL default '0000-00-00',
          `ref` varchar(20) NOT NULL default '',
          `code1` varchar(50) NOT NULL default '', `code2` varchar(50) NOT NULL default '', `code3` varchar(50) NOT NULL default '',
          `code4` varchar(50) NOT NULL default '', `code5` varchar(50) NOT NULL default '', `code6` varchar(50) NOT NULL default '',
          `code7` varchar(50) NOT NULL default '', `code8` varchar(50) NOT NULL default '', `code9` varchar(50) NOT NULL default '',
          `code10` varchar(50) NOT NULL default '',
          `code11` varchar(50) character set latin1 collate latin1_bin NOT NULL default '',
          `code12` varchar(50) NOT NULL default '', `code13` varchar(50) NOT NULL default '',
          `code14` varchar(50) NOT NULL default '', `code15` varchar(50) NOT NULL default '',
          `product1` varchar(100) NOT NULL default '0', `product2` varchar(100) NOT NULL default '0', `product3` varchar(100) NOT NULL default '0',
          `product4` varchar(100) NOT NULL default '0', `product5` varchar(100) NOT NULL default '0', `product6` varchar(100) NOT NULL default '0',
          `product7` varchar(100) NOT NULL default '0', `product8` varchar(100) NOT NULL default '0', `product9` varchar(100) NOT NULL default '0',
          `product10` varchar(100) NOT NULL default '0', `product11` varchar(100) NOT NULL default '0', `product12` varchar(100) NOT NULL default '0',
          `product13` varchar(100) NOT NULL default '0', `product14` varchar(100) NOT NULL default '0', `product15` varchar(100) NOT NULL default '0',
          `price1` int(10) NOT NULL default '0', `price2` int(10) NOT NULL default '0', `price3` int(10) NOT NULL default '0',
          `price4` int(10) NOT NULL default '0', `price5` int(10) NOT NULL default '0', `price6` int(10) NOT NULL default '0',
          `price7` int(10) NOT NULL default '0', `price8` int(10) NOT NULL default '0', `price9` int(10) NOT NULL default '0',
          `price10` int(10) NOT NULL default '0', `price11` int(10) NOT NULL default '0', `price12` int(10) NOT NULL default '0',
          `price13` int(10) NOT NULL default '0', `price14` int(10) NOT NULL default '0', `price15` int(10) NOT NULL default '0',
          `quantity1` int(10) NOT NULL default '0', `quantity2` int(10) NOT NULL default '0', `quantity3` int(10) NOT NULL default '0',
          `quantity4` int(10) NOT NULL default '0', `quantity5` int(10) NOT NULL default '0', `quantity6` int(10) NOT NULL default '0',
          `quantity7` int(10) NOT NULL default '0', `quantity8` int(10) NOT NULL default '0', `quantity9` int(10) NOT NULL default '0',
          `quantity10` int(10) NOT NULL default '0', `quantity11` int(10) NOT NULL default '0', `quantity12` int(10) NOT NULL default '0',
          `quantity13` int(10) NOT NULL default '0', `quantity14` int(10) NOT NULL default '0', `quantity15` int(10) NOT NULL default '0',
          `amount1` int(10) NOT NULL default '0', `amount2` int(10) NOT NULL default '0', `amount3` int(10) NOT NULL default '0',
          `amount4` int(10) NOT NULL default '0', `amount5` int(10) NOT NULL default '0', `amount6` int(10) NOT NULL default '0',
          `amount7` int(10) NOT NULL default '0', `amount8` int(10) NOT NULL default '0', `amount9` int(10) NOT NULL default '0',
          `amount10` int(10) NOT NULL default '0', `amount11` int(10) NOT NULL default '0', `amount12` int(10) NOT NULL default '0',
          `amount13` int(10) NOT NULL default '0', `amount14` int(10) NOT NULL default '0', `amount15` int(10) NOT NULL default '0',
          `totalNaira` double(20,0) NOT NULL default '0',
          `totalDollar` int(20) NOT NULL default '0',
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 COMMENT='InnoDB free: 4096 kB; InnoDB free: 4096 kB; InnoDB free: 409';

        #----------------------------
        # Records for table litorder
        #----------------------------
        insert into litorder values
        (27, 'Sanyaolu Fisayo', '14 Adegboyega Street Palmgrove Lagos', '2010-05-31', '', 'DL 001', 'DL 002', 'DL 003', '', '', '', '', '', '', '', '', '', '', '', '', 'AILMENT & PREVENTION DVD- ENGLISH', 'AILMENT & PREVENTION DVD- HAUSA', 'BEAUTY CD', '', '', '', '', '', '', '', '', '', '', '', '', 800, 800, 3000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16, 16, 20, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 12800, 12800, 60000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, '85600', 563),
        (28, 'Irenonse Esther', 'Lagos,Nigeria', '2010-06-01', '', 'DL 005', 'DL 008', 'FC 004', '', '', '', '', '', '', '', '', '', '', '', '', 'GET HEALTHY DVD', 'YOUR FUTURE DVD', 'FOREVER FACE CAP (YELLOW)', '', '', '', '', '', '', '', '', '', '', '', '', 1000, 900, 2000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2000, 1800, 6000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, '9800', 64),
        (29, 'Kalu Lekway', 'Lagos, Nigeria', '2010-06-01', '', 'DL 001', 'DL 003', '', '', '', '', '', '', '', '', '', '', '', '', '', 'AILMENT & PREVENTION DVD- ENGLISH', 'BEAUTY CD', '', '', '', '', '', '', '', '', '', '', '', '', '', 800, 3000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2400, 18000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, '20400', 133),
        (30, 'Dele', 'Ilupeju', '2010-06-02', '', 'DL 001', 'DL 003', '', '', '', '', '', '', '', '', '', '', '', '', '', 'AILMENT & PREVENTION DVD- ENGLISH', 'BEAUTY CD', '', '', '', '', '', '', '', '', '', '', '', '', '', 800, 3000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 10, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 8000, 30000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, '38000', 250);
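
    With the repeating column groups, "quantity sold per product" means unpivoting the fifteen slots back into rows; a sketch showing the first three groups (extend the UNION ALL through product15/quantity15):

        SELECT product, SUM(quantity) AS total_sold
          FROM (
                SELECT product1 AS product, quantity1 AS quantity FROM litorder
                UNION ALL SELECT product2, quantity2 FROM litorder
                UNION ALL SELECT product3, quantity3 FROM litorder
                -- ... one SELECT per slot, up to product15 / quantity15
               ) AS unpivoted
         WHERE product NOT IN ('', '0')
         GROUP BY product;

    Longer term, a normalized child table (one row per order line, with order id, product, price, quantity) would turn this into a plain GROUP BY and remove the 15-slot ceiling.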

    Read the article

  • How to Access a descendant object's internal method in C#

    - by Giovanni Galbo
    I'm trying to access a method that is marked as internal in the parent class (in its own assembly) from an object that inherits from the same parent. Let me explain what I'm trying to do... I want to create Service classes that return IEnumerable with an underlying List to non-Service classes (e.g. the UI) and optionally return an IEnumerable with an underlying IQueryable to other services. I wrote some sample code to demonstrate what I'm trying to accomplish, shown below. The example is not real life, so please remember that when commenting. All services would inherit from something like this (only relevant code shown):

        public class ServiceBase<T>
        {
            protected readonly ObjectContext _context;
            protected string _setName = String.Empty;

            public ServiceBase(ObjectContext context)
            {
                _context = context;
            }

            public IEnumerable<T> GetAll()
            {
                return GetAll(false);
            }

            //These are not the correct access modifiers.. I want something
            //that is accessible to children classes AND between descendant classes
            internal protected IEnumerable<T> GetAll(bool returnQueryable)
            {
                var query = _context.CreateQuery<T>(GetSetName());
                if(returnQueryable)
                {
                    return query;
                }
                else
                {
                    return query.ToList();
                }
            }

            private string GetSetName()
            {
                //Some code...
                return _setName;
            }
        }

    Inherited services would look like this:

        public class EmployeeService : ServiceBase<Employees>
        {
            public EmployeeService(ObjectContext context) : base(context) { }
        }

        public class DepartmentService : ServiceBase<Departments>
        {
            private readonly EmployeeService _employeeService;

            public DepartmentService(ObjectContext context, EmployeeService employeeService) : base(context)
            {
                _employeeService = employeeService;
            }

            public IList<Departments> DoSomethingWithEmployees(string lastName)
            {
                //won't work because method with this signature is not visible to this class
                var emps = _employeeService.GetAll(true);
                //more code...
            }
        }

    Because the parent class is reusable, it would live in a different assembly than the child services. With GetAll(bool returnQueryable) being marked internal, the child services would not be able to see each other's GetAll(bool) method, just the public GetAll() method. I know that I can add a new internal GetAll method to each service (or perhaps an intermediary parent class within the same assembly) so that each child service within the assembly can see the others' methods, but it seems unnecessary since the functionality is already available in the parent class. For example:

        internal IEnumerable<Employees> GetAll(bool returnIQueryable)
        {
            return base.GetAll(returnIQueryable);
        }

    Essentially what I want is for services to be able to access other services' methods as IQueryable, so that they can further refine the uncommitted results, while everyone else gets plain old lists. Any ideas?

    EDIT: You know what, I had some fun playing a little code golf with this... but ultimately I wouldn't be able to use this scheme anyway, because I pass interfaces around, not classes. So in my example GetAll(bool returnIQueryable) would not be in the interface, meaning I'd have to do casting, which goes against what I'm trying to accomplish. I'm not sure if I had a brain fart or if I was just too excited trying to get something that I thought was neat to work. Either way, thanks for the responses.

    Read the article

  • Access violation when accessing a COM object from .Net

    - by Groo
    Dear Sirs, I am sorry if the post is too long, but I would be happy if someone would at least read the bolded titles and point me in the right direction. I have been having this problem for a couple of days, but was unable to find the answer on the net. These are the things I have found out so far.

    1. "Access violation" exception crashes my managed application

    My C# WinForms app sometimes closes with an "Access violation" exception ("Attempted to read or write protected memory"), right at the moment a TabPage in a Windows Forms TabControl is selected. From the stack trace (try/catch around Application.Run) I can see that the exception happens at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg), called inside UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData).

        -- Message:
        Attempted to read or write protected memory. This is often an indication that other memory is corrupt.

        -- Stack trace:
        at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg)
        at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData)
        at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context)
        at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context)
        at System.Windows.Forms.Application.Run(ApplicationContext context)
        at MyApp.Program.Main()

    2. The faulting module seems to be a COM object (ChartFX Client Server 6.2)

    Using WinDbg (with SoS loaded), I caught it on the unmanaged side, inside ChartFX.ClientServer.Core.dll (that's a COM charting component we are using):

        (ca84.c98c): Access violation - code c0000005 (first chance)
        First chance exceptions are reported before any exception handling.
        This exception may be expected and handled.
        eax=00000000 ebx=06e67c38 ecx=06e67c38 edx=000018c6 esi=06e7df30 edi=317a9e80
        eip=31666110 esp=0015e040 ebp=0015e08c iopl=0 nv up ei pl zr na pe nc
        cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00010246
        ChartFX_ClientServer_Core!Ordinal5507+0x97b7:
        31666110 8a404d mov al,byte ptr [eax+4Dh] ds:0023:0000004d=??

    [edit:] I also wasn't able to get the unmanaged stack details from WinDbg (it said "Stack unwind info not available"):

        0:000> kP
        ChildEBP RetAddr
        WARNING: Stack unwind information not available. Following frames may be wrong.
        0015e08c 3166288b ChartFX_ClientServer_Core!Ordinal5507+0x97b7
        0015e394 3165a921 ChartFX_ClientServer_Core!Ordinal5507+0x5f32
        0015e480 31678685 ChartFX_ClientServer_Core!Ordinal5496+0x26a
        0015e568 3167bef4 ChartFX_ClientServer_Core!Ordinal5492+0x975
        0015e668 316a356b ChartFX_ClientServer_Core!Ordinal5492+0x41e4
        0015e77c 31709496 ChartFX_ClientServer_Core!Ordinal443+0x5745
        0015e7d0 31707f70 ChartFX_ClientServer_Core!Ordinal2584+0x3cdc
        0015e7f8 3170817d ChartFX_ClientServer_Core!Ordinal2584+0x27b6
        0015e81c 3162fd76 ChartFX_ClientServer_Core!Ordinal2584+0x29c3
        0015e86c 7719f8d2 ChartFX_ClientServer_Core!Ordinal899+0x6b6
        0015e898 7719f794 USER32!GetMessageW+0x93
        0015e910 771a06f6 USER32!GetWindowLongW+0x115
        0015e940 771a069c USER32!CallWindowProcW+0x75
        0015e960 747fcef4 USER32!CallWindowProcW+0x1b
        0015e97c 747fd073 comctl32!Ordinal377+0x5c
        0015e9e0 747fd027 comctl32!DefSubclassProc+0x92
        0015ea04 747fd4e6 comctl32!DefSubclassProc+0x46
        0015ea20 747fd073 comctl32!DefSubclassProc+0x505
        0015ea84 747fd118 comctl32!DefSubclassProc+0x92
        0015eae4 7719f8d2 comctl32!DefSubclassProc+0x137

    3. The bug is not easy to reproduce (although it can usually be provoked in less than 5 minutes)

    I have several Chart instances in several TabPages, and this usually happens while I am switching the tabs. I still don't know how to reproduce it, besides switching those tabs for several minutes before it happens, so I cannot use our source control to reliably find the build which didn't have this problem. I am accessing the charts through the managed AxChart wrapper class (derived from AxHost), which was created by the VS designer automatically.

    4. What should be my next step?

    If someone could point me to the next step I should take to find the actual cause, I would be very grateful. Experimenting (removing and returning code) does not do much good, because I don't know how to reproduce it, so it would take large amounts of time on each iteration just to convince myself that the bug is still there. I have found that people often suggest something like "switching compiler optimizations", but since the exception is not thrown deterministically, I don't want to simply rearrange some bytes and hope that it never returns. Thanks a lot in advance! Best regards, Groo

    Read the article

  • "Invalid Handle Object" when plotting 2 figures Matlab

    - by pinnacler
    I'm having a difficult time understanding the paradigm of Matlab classes compared to C++. I wrote code the other day, and I thought it should work. It did not... until I added < handle after the classdef. So I have two classes, landmarks and robot, both of which are called from within the simulation class. This is the main loop of obj.simulation.animate() and it works, until I try to plot two things at once. DATA.path is a record of all the places a robot has been on the map, and it's updated every time the position is updated. When I try to plot it, by uncommenting the two marked lines below, I get this error:

        ??? Error using ==> set
        Invalid handle object.
        Error in ==> simulation>simulation.animate at 45
        set(l.lm,'XData',obj.landmarks.apparentPositions(:,1),'YData',obj.landmarks.apparentPositions(:,2));

    Here is the main loop:

        %INITIALIZE GLOBALS
        global DATA XX
        XX = [obj.robot.x ; obj.robot.y];
        DATA.i=1;
        DATA.path = XX;

        %Setup Plots
        fig=figure;
        xlabel('meters'), ylabel('meters')
        set(fig, 'name', 'Phil''s AWESOME 80''s Robot Simulator')
        xymax = obj.landmarks.mapSize*3;
        xymin = -(obj.landmarks.mapSize*3);
        l.lm=scatter([0],[0],'b+');
        %"UNCOMMENT ME" l.pth= plot(0,0,'k.','markersize',2,'erasemode','background'); % vehicle path
        axis([xymin xymax xymin xymax]);

        %Simulation Loop
        for n = 1:720,
            %Calculate and Set Heading/Location
            XX = [obj.robot.x;obj.robot.y];
            store_data(XX);
            if n == 120, DATA.path end

            %Update Position
            headingChange = navigate(n);
            obj.robot.updatePosition(headingChange);
            obj.landmarks.updatePerspective(obj.robot.heading, obj.robot.x, obj.robot.y);

            %Animate
            %"UNCOMMENT ME" set(l.pth, 'xdata', DATA.path(1,1:DATA.i), 'ydata', DATA.path(2,1:DATA.i));
            set(l.lm,'XData',obj.landmarks.apparentPositions(:,1),'YData',obj.landmarks.apparentPositions(:,2));
            rectangle('Position',[-2,-2,4,4]);
            drawnow

    This is the classdef for landmarks:

        classdef landmarks < handle
            properties
                fixedPositions; %# positions in a fixed coordinate system. [ x, y ]
                mapSize;        %Map Size. Value is side of square
                x;
                y;
                heading;
                headingChange;
            end
            properties (Dependent)
                apparentPositions
            end
            methods
                function obj = landmarks(mapSize, numberOfTrees)
                    obj.mapSize = mapSize;
                    obj.fixedPositions = obj.mapSize * rand([numberOfTrees, 2]) .* sign(rand([numberOfTrees, 2]) - 0.5);
                end
                function apparent = get.apparentPositions(obj)
                    currentPosition = [obj.x ; obj.y];
                    apparent = bsxfun(@minus,(obj.fixedPositions)',currentPosition)';
                    apparent = ([cosd(obj.heading) -sind(obj.heading) ; sind(obj.heading) cosd(obj.heading)] * (apparent)')';
                end
                function updatePerspective(obj,tempHeading,tempX,tempY)
                    obj.heading = tempHeading;
                    obj.x = tempX;
                    obj.y = tempY;
                end
            end
        end

    To me, this is how I understand things: I created a figure, l.lm, that has about 100 xy points. I can rotate this figure by using set(l.lm,'XData',obj.landmarks.apparentPositions(:,1),'YData',obj.landmarks.apparentPositions(:,2)); and when I do that, things work. When I try to plot a second group of XY points, stored in DATA.path, it craps out and I can't figure out why.

    Read the article

  • Database connection timeout

    - by Clinton Bosch
    Hi, I have read so many articles on the Internet about this problem, but none seem to have a clear solution. Please could someone give me a definite answer as to why I am getting database timeouts. The app is a GWT app that is being hosted on a Tomcat 5.5 server. I use Spring, and the session factory is created in applicationContext.xml as follows:

        <bean id="sessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
            <property name="hibernateProperties">
                <props>
                    <prop key="hibernate.dialect">${connection.dialect}</prop>
                    <prop key="hibernate.connection.username">${connection.username}</prop>
                    <prop key="hibernate.connection.password">${connection.password}</prop>
                    <prop key="hibernate.connection.url">${connection.url}</prop>
                    <prop key="hibernate.connection.driver_class">${connection.driver.class}</prop>
                    <prop key="hibernate.show_sql">${show.sql}</prop>
                    <prop key="hibernate.hbm2ddl.auto">update</prop>
                    <prop key="hibernate.c3p0.min_size">5</prop>
                    <prop key="hibernate.c3p0.max_size">20</prop>
                    <prop key="hibernate.c3p0.timeout">1800</prop>
                    <prop key="hibernate.c3p0.max_statements">50</prop>
                    <prop key="hibernate.c3p0.idle_test_period">300</prop>
                </props>
            </property>
            <property name="annotatedClasses">
                <list>
                    <value>za.co.xxxx.traintrack.server.model.Answer</value>
                    <value>za.co.xxxx.traintrack.server.model.Company</value>
                    <value>za.co.xxxx.traintrack.server.model.CompanyRegion</value>
                    <value>za.co.xxxx.traintrack.server.model.Merchant</value>
                    <value>za.co.xxxx.traintrack.server.model.Module</value>
                    <value>za.co.xxxx.traintrack.server.model.Question</value>
                    <value>za.co.xxxx.traintrack.server.model.User</value>
                    <value>za.co.xxxx.traintrack.server.model.CompletedModule</value>
                </list>
            </property>
        </bean>

        <bean id="dao" class="za.co.xxxx.traintrack.server.DAO">
            <property name="sessionFactory" ref="sessionFactory"/>
            <property name="adminUsername" value="${admin.user.name}"/>
            <property name="adminPassword" value="${admin.user.password}"/>
            <property name="godUsername" value="${god.user.name}"/>
            <property name="godPassword" value="${god.user.password}"/>
        </bean>

    All works fine until the next day:

        INFO | jvm 1 | 2010/06/15 14:42:27 | 2010-06-15 18:42:27,804 WARN [JDBCExceptionReporter] : SQL Error: 0, SQLState: 08S01
        INFO | jvm 1 | 2010/06/15 14:42:27 | 2010-06-15 18:42:27,821 ERROR [JDBCExceptionReporter] : The last packet successfully received from the server was 38729 seconds ago. The last packet sent successfully to the server was 38729 seconds ago, which is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
        INFO | jvm 1 | 2010/06/15 14:42:27 | Jun 15, 2010 6:42:27 PM org.apache.catalina.core.ApplicationContext log
        INFO | jvm 1 | 2010/06/15 14:42:27 | SEVERE: Exception while dispatching incoming RPC call

    I have read so many different things (none of which worked), please help
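
    The exception text itself names the likely culprit: MySQL's wait_timeout closes idle connections server-side (38729 seconds is just past the 28800-second default), and the pool then hands out a dead socket the next morning. Checking the server value is a one-liner; the usual fix is to make the pool test idle connections more often than the server kills them, and note that the hibernate.c3p0.* properties above only take effect if Hibernate is actually using the C3P0ConnectionProvider (c3p0 jar on the classpath), otherwise its built-in pool, which does no testing, is silently used:

        SHOW VARIABLES LIKE 'wait_timeout';  -- default is 28800 seconds (8 hours)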

    Read the article

  • Win32 -- Object cleanup and global variables

    - by KaiserJohaan
    Hello, I've got a question about global variables and object cleanup in C++. For example, look at the code here:

        case WM_PAINT:
            paintText(&hWnd);
            break;

        void paintText(HWND* hWnd) {
            PAINTSTRUCT ps;
            HBRUSH hbruzh = CreateSolidBrush(RGB(0,0,0));
            HDC hdz = BeginPaint(*hWnd,&ps);
            char s1[] = "Name";
            char s2[] = "IP";
            SelectBrush(hdz,hbruzh);
            SelectFont(hdz,hFont);
            SetBkMode(hdz,TRANSPARENT);
            TextOut(hdz,3,23,s1,sizeof(s1));
            TextOut(hdz,10,53,s2,sizeof(s2));
            EndPaint(*hWnd,&ps);
            DeleteObject(hdz);
            DeleteObject(hbruzh); // bad?
            DeleteObject(ps);     // bad?
        }

    1) First of all: which objects are good to delete and which ones are NOT good to delete, and why? Not 100% sure of this.

    2) Since WM_PAINT is called every time the window is redrawn, would it be better to simply store ps, hdz and hbruzh as global variables instead of re-initializing them every time? The downside I guess would be tons of global variables in the end, but performance-wise, would it not be less CPU-consuming? I know it probably won't matter, but I'm just aiming to be as minimal as possible for educational purposes.

    3) What about libraries that are loaded in? For example:

        //
        // Main
        //
        int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow) {
            // initialize vars
            HWND hWnd;
            WNDCLASSEX wc;
            HINSTANCE hlib = LoadLibrary("Riched20.dll");
            ThishInstance = hInstance;
            ZeroMemory(&wc,sizeof(wc));
            // set WNDCLASSEX props
            wc.cbSize = sizeof(WNDCLASSEX);
            wc.lpfnWndProc = WindowProc;
            wc.hInstance = ThishInstance;
            wc.hIcon = LoadIcon(hInstance,MAKEINTRESOURCE(IDI_MYICON));
            wc.lpszMenuName = MAKEINTRESOURCE(IDR_MENU1);
            wc.hCursor = LoadCursor(NULL, IDC_ARROW);
            wc.hbrBackground = (HBRUSH)COLOR_WINDOW;
            wc.lpszClassName = TEXT("PimpClient");
            RegisterClassEx(&wc);
            // create main window and display it
            hWnd = CreateWindowEx(NULL, wc.lpszClassName, TEXT("PimpClient"), 0, 300, 200, 450, 395, NULL, NULL, hInstance, NULL);
            createWindows(&hWnd);
            ShowWindow(hWnd,nCmdShow);
            // loop message queue
            MSG msg;
            while (GetMessage(&msg, NULL,0,0)) {
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
            // cleanup?
            FreeLibrary(hlib);
            return msg.wParam;
        }

    3 cont.) Is there a reason to FreeLibrary at the end? I mean, when the process terminates, all resources are freed anyway? And since the library is used to paint text throughout the program, why would I want to free it before that? Cheers

    Read the article

  • Var null and not an object when using document.getElementById

    - by Dean
    Hi, I'm doing some work in HTML and jQuery. My problem is that the textarea and submit button do not appear after a radio button is selected. My HTML looks like this:

        <html>
        <head>
          <title>Publications Database | Which spotlight for Publications</title>
          <script type="text/javascript" src="./jquery.js"></script>
          <script src="./addSpotlight.js" type="text/javascript" charset="utf-8"></script>
        </head>
        <body>
          <div class="wrapper">
            <div class="header">
              <div class="headerText">Which spotlight for Publications</div>
            </div>
            <div class="mainContent">
              <p>Please select the publication that you would like to make the spotlight of this month:</p>
              <form action="addSpotlight" method="POST" id="form" name="form">
                <div class="div29" id="div29"><input type="radio" value="29" name="publicationIDs">A System For Dynamic Server Allocation in Application Server Clusters, IEEE International Symposium on Parallel and Distributed Processsing with Applications, 2008</div>
                <div class="div30" id="div30"><input type="radio" value="30" name="publicationIDs">Analysing BitTorrent's Seeding Strategies, 7th IEEE/IFIP International Conference on Embedded and Ubiquitous Computing (EUC-09), 2009</div>
                <div class="div31" id="div31"><input type="radio" value="31" name="publicationIDs">The Effect of Server Reallocation Time in Dynamic Resource Allocation, UK Performance Engineering Workshop 2009, 2009</div>
                <div class="div32" id="div32"><input type="radio" value="32" name="publicationIDs">idk, hello, 1992</div>
                <div class="div33" id="div33"><input type="radio" value="33" name="publicationIDs">sad, safg, 1992</div>
                <div class="abstractWriteup" id="abstractWriteup">
                  <textarea name="abstract"></textarea>
                  <input type="submit" value="Add Spotlight">
                </div>
              </form>
            </div>
          </div>
        </body>
        </html>

    My JavaScript looks like this:

        $(document).ready(function() {
            $('.abstractWriteup').hide();
            addAbstract();
        });

        function addAbstract() {
            var abstractWU = document.getElementById('.abstractWriteup');
            $("input[name='publicationIDs']").change(function() {
                var abstractWU = document.getElementById('.abstractWriteup');
                var classNameOfSelected = $("input[name='publicationIDs']").val();
                var radioSelected = document.getElementById("div" + classNameOfSelected);
                var parentDiv = radioSelected.parentNode;
                parentDiv.insertBefore(radioSelected, abstractWU.nextSibling);
                $('.abstractWriteup').show();
            });
        }

    I built this around Node#insertBefore, and when I have had it working, it has been rearranging the radio buttons. Thanks in advance, Dean
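    Not part of the original post: document.getElementById takes a bare id, so getElementById('.abstractWriteup') looks for an element whose id is literally ".abstractWriteup" and returns null, which is where the "null and not an object" error comes from. A minimal sketch of the handler with that fixed, assuming the markup above and that the intent is to move the textarea under the selected radio (rather than move the radio itself, which would explain the rearranging):

        // Sketch only, assuming the poster's markup.
        function addAbstract() {
            $("input[name='publicationIDs']").change(function() {
                var abstractWU = document.getElementById('abstractWriteup');   // no leading dot
                var selected = document.getElementById('div' + $(this).val()); // the radio's own div
                // Move the abstract box directly after the selected radio's div.
                selected.parentNode.insertBefore(abstractWU, selected.nextSibling);
                $('#abstractWriteup').show();
            });
        }

    Note also that $("input[name='publicationIDs']").val() always reads the first matched input, whereas $(this).val() inside the handler reads the radio that actually changed.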

    Read the article

  • function fetch() on a non-object problem

    - by shin
    I have this URL:

        http://webworks.net/ww.incs/forgotten-password-verification.php?verification_code=974bf747124c69f12ae3b36afcaccc68&[email protected]&redirect=/ww.admin/index.php

    and it gives the following error:

        Fatal error: Call to a member function fetch() on a non-object in /var/www/webworks/ww.incs/basics.php on line 23

        Call Stack:
            0.0005   338372   1. {main}() /var/www/webworks/ww.incs/forgotten-password-verification.php:0
            0.0020   363796   2. dbRow() /var/www/webworks/ww.incs/forgotten-password-verification.php:18

    forgotten-password-verification.php:

        require 'login-libs.php';
        login_check_is_email_provided();

        // check that a verification code was provided
        if( !isset($_REQUEST['verification_code']) || $_REQUEST['verification_code']=='' ){
            login_redirect($url,'novalidation');
        }

        // check that the email/verification code combination matches a row in the user table
        // $password=md5($_REQUEST['email'].'|'.$_REQUEST['password']);
        $r=dbRow('select * from user_accounts where email="'.addslashes($_REQUEST['email']).'" and verification_code="'.$_REQUEST['verification_code'].'" and active');
        if($r==false){
            login_redirect($url,'validationfailed');
        }

        // success! set the session variable, then redirect
        $_SESSION['userdata']=$r;
        $groups=json_decode($r['groups']);
        $_SESSION['userdata']['groups']=array();
        foreach($groups as $g)$_SESSION['userdata']['groups'][$g]=true;
        if($r['extras']=='')$r['extras']='[]';
        $_SESSION['userdata']['extras']=json_decode($r['extras']);
        login_redirect($url);

    login-libs.php:

        require 'basics.php';
        $url='/';
        $err=0;

        function login_redirect($url,$msg='success'){
            if($msg)$url.='?login_msg='.$msg;
            header('Location: '.$url);
            echo '<a href="'.htmlspecialchars($url).'">redirect</a>';
            exit;
        }

        // set up the redirect
        if(isset($_REQUEST['redirect'])){
            $url=preg_replace('/[\?\&].*/','',$_REQUEST['redirect']);
            if($url=='')$url='/';
        }

        // check that the email address is provided and valid
        function login_check_is_email_provided(){
            if( !isset($_REQUEST['email']) || $_REQUEST['email']=='' || !filter_var($_REQUEST['email'], FILTER_VALIDATE_EMAIL) ){
                login_redirect($GLOBALS['url'],'noemail');
            }
        }

        // check that the captcha is provided
        function login_check_is_captcha_provided(){
            if( !isset($_REQUEST["recaptcha_challenge_field"]) || $_REQUEST["recaptcha_challenge_field"]=='' || !isset($_REQUEST["recaptcha_response_field"]) || $_REQUEST["recaptcha_response_field"]=='' ){
                login_redirect($GLOBALS['url'],'nocaptcha');
            }
        }

        // check that the captcha is valid
        function login_check_is_captcha_valid(){
            require 'recaptcha.php';
            $resp=recaptcha_check_answer( RECAPTCHA_PRIVATE, $_SERVER["REMOTE_ADDR"], $_REQUEST["recaptcha_challenge_field"], $_REQUEST["recaptcha_response_field"] );
            if(!$resp->is_valid){
                login_redirect($GLOBALS['url'],'invalidcaptcha');
            }
        }

    basics.php:

        session_start();

        function __autoload($name) {
            require $name . '.php';
        }

        function dbInit(){
            if(isset($GLOBALS['db']))return $GLOBALS['db'];
            global $DBVARS;
            $db=new PDO('mysql:host='.$DBVARS['hostname'].';dbname='.$DBVARS['db_name'],$DBVARS['username'],$DBVARS['password']);
            $db->query('SET NAMES utf8');
            $db->num_queries=0;
            $GLOBALS['db']=$db;
            return $db;
        }

        function dbQuery($query){
            $db=dbInit();
            $q=$db->query($query);
            $db->num_queries++;
            return $q;
        }

        function dbRow($query) {
            $q = dbQuery($query);
            return $q->fetch(PDO::FETCH_ASSOC);
        }

        define('SCRIPTBASE', $_SERVER['DOCUMENT_ROOT'] . '/');
        require SCRIPTBASE . '.private/config.php';
        if(!defined('CONFIG_FILE'))define('CONFIG_FILE',SCRIPTBASE.'.private/config.php');
        set_include_path(SCRIPTBASE.'ww.php_classes'.PATH_SEPARATOR.get_include_path());

    I am not sure how to solve the problem. Any help will be appreciated. Thanks in advance.

    UPDATE: My database schema is:

        CREATE TABLE IF NOT EXISTS `user_accounts` (
          `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
          `email` text,
          `password` char(32) DEFAULT NULL,
          `active` tinyint(4) DEFAULT '0',
          `groups` text,
          `activation_key` varchar(32) DEFAULT NULL,
          `extras` text,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=10 ;

        INSERT INTO `user_accounts` (`id`, `email`, `password`, `active`, `groups`, `activation_key`, `extras`) VALUES
        (2, '[email protected]', '6d24dde9d56b9eab99a303a713df2891', 1, '["_superadministrators"]', '5d50e39420127d0bab44a56612f2d89b', NULL),
        (3, '[email protected]', 'e83052ab33df32b94da18f6ff2353e94', 1, '[]', NULL, NULL),
        (9, '[email protected]', '9ca3eee3c43384a575eb746eeae0f279', 1, '["_superadministrators"]', '974bf747124c69f12ae3b36afcaccc68', NULL);
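    Not part of the original post: PDO::query() returns false on failure, so when the SELECT fails, dbRow() ends up calling fetch() on a boolean, which is exactly the reported fatal error on line 23 of basics.php. One likely culprit given the schema above is that the table has an activation_key column while the query filters on verification_code, which does not exist. A minimal sketch of a more defensive dbRow(), assuming the dbInit()/dbQuery() helpers from basics.php, would surface the underlying SQL error:

        // Sketch only: guard against PDO::query() returning false.
        function dbRow($query) {
            $q = dbQuery($query);
            if ($q === false) {
                // Surface the real SQL error (e.g. an unknown column)
                // instead of calling fetch() on a boolean.
                $err = dbInit()->errorInfo();
                trigger_error('Query failed: ' . $err[2], E_USER_ERROR);
            }
            return $q->fetch(PDO::FETCH_ASSOC);
        }

    Alternatively, calling $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION) in dbInit() makes every failed query throw immediately, with the driver's error message attached.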

    Read the article
