Search Results

Search found 53087 results on 2124 pages for 'database development'.

Page 393/2124 | < Previous Page | 389 390 391 392 393 394 395 396 397 398 399 400  | Next Page >

  • Replication keeps popping up on SharePoint databases

    - by Ddono25
    My typical discovery scenario: we receive an alert that the transaction log is growing quickly. We are in Simple recovery, so I go to check it out. The log is already sized to 100GB and is at 80% capacity. I run the "What's using my log files" script from SQL Server Central and see that replication is enabled on the database. We did not set up replication, and I don't think replication can even be done on SharePoint content DBs, since replication is not supported there (it requires a PK on all tables). This has been occurring on random servers (about 5 so far, all within the past three weeks), and it only occurs on content databases. sp_removedbreplication does not always remove the replication, either. We have found that we need to run sp_removedbreplication, change all DB owners to SA, and reset the recovery model to Simple to completely eradicate any vestiges of this bug. How would replication be enabling itself? We have never set up replication on these servers. There is no evidence of any type of replication other than the 'log_reuse_wait_desc' from the DMV query and the log growth. Any help on this ghost would be appreciated!
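
    For reference, a minimal T-SQL sketch of the cleanup sequence described above. The database name ContentDB is a hypothetical placeholder, and the sp_replicationdboption call is an extra step that sometimes clears lingering replication markers when sp_removedbreplication alone does not:

        -- find databases whose log is pinned by replication
        SELECT name, log_reuse_wait_desc
        FROM sys.databases
        WHERE log_reuse_wait_desc = 'REPLICATION';

        -- per affected database (ContentDB is a placeholder name)
        EXEC sp_removedbreplication 'ContentDB';
        EXEC sp_replicationdboption @dbname = 'ContentDB',
             @optname = 'publish', @value = 'false';
        ALTER AUTHORIZATION ON DATABASE::ContentDB TO sa;
        ALTER DATABASE ContentDB SET RECOVERY SIMPLE;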

    Read the article

  • postgresql deleting old records from log tables

    - by Max
    I have a PostgreSQL database which stores my RADIUS connection information. What I want to do is keep only a month's worth of logs. How would I craft a SQL statement, runnable from cron, that deletes any rows older than a month? The date is taken from the acctstoptime column, in this format: 2010-01-27 16:02:17-05. Here is the schema of the table in question:

        -- Table: radacct
        CREATE TABLE radacct (
          radacctid bigserial NOT NULL,
          acctsessionid character varying(32) NOT NULL,
          acctuniqueid character varying(32) NOT NULL,
          username character varying(253),
          groupname character varying(253),
          realm character varying(64),
          nasipaddress inet NOT NULL,
          nasportid character varying(15),
          nasporttype character varying(32),
          acctstarttime timestamp with time zone,
          acctstoptime timestamp with time zone,
          acctsessiontime bigint,
          acctauthentic character varying(32),
          connectinfo_start character varying(50),
          connectinfo_stop character varying(50),
          acctinputoctets bigint,
          acctoutputoctets bigint,
          calledstationid character varying(50),
          callingstationid character varying(50),
          acctterminatecause character varying(32),
          servicetype character varying(32),
          xascendsessionsvrkey character varying(10),
          framedprotocol character varying(32),
          framedipaddress inet,
          acctstartdelay integer,
          acctstopdelay integer,
          freesidestatus character varying(32),
          CONSTRAINT radacct_pkey PRIMARY KEY (radacctid)
        ) WITH (OIDS=FALSE);
        ALTER TABLE radacct OWNER TO radius;

        -- Index: freesidestatus
        CREATE INDEX freesidestatus ON radacct USING btree (freesidestatus);
        -- Index: radacct_active_user_idx
        CREATE INDEX radacct_active_user_idx ON radacct USING btree (username, nasipaddress, acctsessionid) WHERE acctstoptime IS NULL;
        -- Index: radacct_start_user_idx
        CREATE INDEX radacct_start_user_idx ON radacct USING btree (acctstarttime, username);
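
    A minimal sketch of such a statement. It assumes age is judged by acctstoptime and that rows with a NULL acctstoptime (sessions still open) should be kept:

        -- delete accounting rows that stopped more than a month ago
        DELETE FROM radacct
        WHERE acctstoptime IS NOT NULL
          AND acctstoptime < now() - interval '1 month';

    From cron this could run nightly via psql; the database name and credentials below are placeholders:

        0 2 * * * psql -U radius -d radius -c "DELETE FROM radacct WHERE acctstoptime IS NOT NULL AND acctstoptime < now() - interval '1 month';"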

    Read the article

  • My system administrator set up 2 databases that sync. Master-Master. However, these two databases a

    - by Alex
    DB1 and DB2. I made changes to DB1, and they do not seem to be on DB2. When I do SHOW SLAVE STATUS\G on DB2, there is an error:

        mysql> show slave status\G
        *************************** 1. row ***************************
        Slave_IO_State: Waiting for master to send event
        Master_Host:
        Master_User:
        Master_Port:
        Connect_Retry: 60
        Master_Log_File: mysql-bin.0005496
        Read_Master_Log_Pos: 5445649315
        Relay_Log_File: mysqld-relay-bin.0041705
        Relay_Log_Pos: 1624302119
        Relay_Master_Log_File: mysql-bin.0004461
        Slave_IO_Running: Yes
        Slave_SQL_Running: No
        Replicate_Do_DB:
        Replicate_Ignore_DB:
        Replicate_Do_Table:
        Replicate_Ignore_Table:
        Replicate_Wild_Do_Table:
        Replicate_Wild_Ignore_Table:
        Last_Errno: 1062
        Last_Error: Error 'Duplicate entry '4779' for key 1' on query. Default database: 'falc'. Query: 'INSERT INTO `log` (`anon_id`, `created_at`, `query`, `episode_url`, `detail_id`, `ip`) VALUES ('fdzn1d45kMavF4qbyePv', '2009-11-19 04:19:13', 'amazon', '', '', '130.126.40.57')'
        Skip_Counter: 0
        Exec_Master_Log_Pos: 162301982
        Relay_Log_Space: 136505187184
        Until_Condition: None
        Until_Log_File:
        Until_Log_Pos: 0
        Master_SSL_Allowed: No
        Master_SSL_CA_File:
        Master_SSL_CA_Path:
        Master_SSL_Cert:
        Master_SSL_Cipher:
        Master_SSL_Key:
        Seconds_Behind_Master: NULL
        1 row in set (0.00 sec)

    Then I did SHOW TABLES, and it seems DB2 is lacking a table that I created on DB1; for some reason, DB2 stopped syncing with DB1. How can I simply get them back into full synchronization? All I want is for DB2 to be exactly the same as DB1!
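
    For what it's worth, the usual first aid for a replication stream halted by a duplicate-key error is to skip the offending statement and restart the SQL thread. A sketch, to be run on DB2; verify first that the duplicate row already holds the right data, since skipping discards that statement:

        STOP SLAVE;
        SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
        START SLAVE;
        SHOW SLAVE STATUS\G  -- confirm Slave_SQL_Running: Yes

    If the servers have drifted further than one statement (the missing table suggests they have), re-seeding DB2 from a fresh dump of DB1 and re-pointing replication is the safer route.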

    Read the article

  • Move the uploads folder in WordPress

    - by Victor Hurdugaci
    Currently, my WordPress uploads folder is located in \wp-content\uploads. Initially there was no structure, so all files were put directly in that folder. After a while the setting was changed to upload files into \wp-content\uploads\YEAR\MONTH. Now the folder contains a mix of files (entries starting with + are folders):

        +wp-content
        | +2010
        | | +02
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | | +01
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | +2009
        | | +12
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | | +11
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | +..
        | | | ..
        | Unstructured-file-1
        | Unstructured-file-2
        | ...
        | Unstructured-file-n

    Based on the dates of the unstructured files, I would like to move them into the structured hierarchy (each file goes to \wp-content\uploads\YEAR\MONTH according to its date). My questions are: Where do I write and execute a script to do the move (I don't have full access to the server, just cPanel and the WordPress admin page)? What must be updated so that links in posts that reference the unstructured files point to the files' new locations? And, not fully related to the above: is it alright to move the whole uploads folder to another location, like \uploads? PS: moving the files/updating the database manually is not an option :)
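
    On the second question: whatever moves a file also has to rewrite the references to it in the database. A sketch of the SQL side for a single file, assuming the default wp_ table prefix; the file names and target folder are hypothetical, and a script would issue one such batch per moved file:

        UPDATE wp_posts
           SET post_content = REPLACE(post_content,
               '/wp-content/uploads/Unstructured-file-1.jpg',
               '/wp-content/uploads/2010/02/Unstructured-file-1.jpg');

        UPDATE wp_posts
           SET guid = REPLACE(guid,
               '/wp-content/uploads/Unstructured-file-1.jpg',
               '/wp-content/uploads/2010/02/Unstructured-file-1.jpg')
         WHERE post_type = 'attachment';

        UPDATE wp_postmeta
           SET meta_value = '2010/02/Unstructured-file-1.jpg'
         WHERE meta_key = '_wp_attached_file'
           AND meta_value = 'Unstructured-file-1.jpg';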

    Read the article

  • Spreadsheet application that can handle big data OS X

    - by Peter
    I've been working with Excel for quite a while for some statistical analysis that I do regularly. The size of the data that I'm working with has gotten much larger as of late, however. The layout of the sheets in question is quite simple, usually just a few columns: a UNIX timestamp, an EST value, a proprietary numeric value, and finally an average of the rows whose timestamp is within +/- 1000 of that row's timestamp (a little AVERAGEIFS() formula). That formula and the EST conversion are the only formulas in the sheet. I'm beginning to work with files of 500,000+ rows, and running the average formula down the entire column takes forever. The end result is the production of print-worthy graphs. I'm looking for either a UNIX command-line utility or a separate spreadsheet/database application that can handle this amount of data without melting my CPU or making me wait an hour. Is there anything out there? TL;DR: a simple Excel sheet with over half a million rows is getting too slow to work with. OS X alternatives?
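
    As a data point for the command-line route: the +/- 1000 windowed average is one correlated subquery away in SQLite, which copes with half a million rows easily. A sketch, with the table and column names invented for illustration after importing the sheet as CSV:

        -- data(ts INTEGER, est TEXT, val REAL) is a hypothetical import of the sheet
        CREATE INDEX IF NOT EXISTS idx_data_ts ON data(ts);

        SELECT d.ts,
               d.val,
               (SELECT AVG(w.val)
                  FROM data w
                 WHERE w.ts BETWEEN d.ts - 1000 AND d.ts + 1000) AS windowed_avg
          FROM data d;

    The index keeps each window lookup to a range scan, so a full pass over this data size should take seconds rather than hours.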

    Read the article

  • SQuirreL Client: Opening up a table in a separate tab

    - by Dalin Seivewright
    I've switched to SQuirreL Client to view an Oracle database with a variety of tables. At any given time, I need to look at the contents of several related tables at the same time. The problem is that, at least by default, SQuirreL Client does not open multiple tables at a time: when you click on a different table object, the main view is refreshed with the newly selected table's data. Oracle's SQL Developer (which I am trying to move away from) did this by default too, if I recall, but there was an option to "freeze" panes. Does a similar option exist in SQuirreL Client? I do not require the ability to view the contents of two different tables on the same screen (i.e. a split view), but I would like tabs for each table so I can quickly switch between table views rather than having to find each table in the table list over and over again. Note: I am using this in a professional capacity, but if the question does not "belong" here then I suppose it could be moved to Super User.

    Read the article

  • Mesh networked servers via VPN

    - by microspino
    I have a design idea and I would like some advice from SF about it. I have 5 customers with small real-estate databases. I've built a desktop app for them, and now they would like to merge their databases to share their data. I don't want to centralize everything in one place, nor do I want to do server maintenance. They also told me that all of them have little servers and maintenance guys available in their offices. Although everything about this seems suited to a web application, I had the idea to experiment with something new: each customer's small server would be connected to the others in a sort of mesh network, over VPNs, with no single point of failure. If one of the servers went down, its customers could still connect to their databases through one of the other meshed servers instead of the local one that is down. During normal operation, all the servers sync the DB with the others over the VPNs. I can accept a half-day window of non-synced data; in other words, since I don't need real-time synchronization, the servers don't always have to stay in sync. I can migrate my data over to non-SQL technologies like CouchDB or Redis or whatever you suggest. As you can see I don't have a lot of constraints, and although I could go with a web application, I would like to delegate and decentralize support, data privacy, and management to my customers' offices as much as I can. Is that a crazy idea? Do you know if something similar exists? Which technology would you suggest?

    Read the article

  • Strategy for Incremental Datasource Fetching in Excel

    - by user1352530
    I am in a scenario where a table is refreshed by a third-party app every week. I need to keep accumulating all of its data in Excel, using an ODBC connection to the database. I am wondering: Approach 1: is there a way to force Excel to append the results of every update (the update would be triggered according to a parameter that indicates the week)? I tried to define the table the connection loads into using a dynamic reference, but once it is anchored the first time, the table position is never redefined. Approach 2: use an ETL to accumulate all weekly results into a staging table and then connect Excel to it in real time. But then I would need a mechanism for caching old data, as I cannot let Excel's load time grow without bound; imagine that after 10 years, Excel would have to fetch 10 years of data at every open before showing anything. Is there a way to store already-fetched data and increment it in real time (when the workbook is opened) by selecting only the new data (with a query/filter or something)? Thanks. EDIT: Maybe it's better to ask it this way: what is the optimal strategy for a table that keeps growing and needs to be read in real time by Excel? I just don't want to fetch absolutely all of the data again after some months...
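
    A sketch of the incremental half of approach 2, with all names hypothetical: the ETL appends only the weeks not yet archived, so history accumulates once, and Excel reads a plain table (or a view restricted to the range being analysed) instead of recomputing the past on every open:

        -- append only the weeks missing from the archive
        INSERT INTO weekly_archive (week_no, metric_a, metric_b)
        SELECT s.week_no, s.metric_a, s.metric_b
        FROM source_refresh s
        WHERE s.week_no > (SELECT COALESCE(MAX(week_no), 0) FROM weekly_archive);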

    Read the article

  • How do I connect remotely to SQL Server from a Windows client?

    - by humble_coder
    Hi all, I'm having a bit of an issue connecting to SQL Server remotely from Windows. I've verified that all of my settings are correct via SQL Server Management Studio Express and SQL Server Configuration Manager. I can connect remotely using ODBC drivers from other OSes (e.g. OS X, Linux, etc.). However, when I connect with the same credentials from a remote Windows machine using "SQL Server" as the driver, I am told that the system cannot connect. I've tried creating an ODBC Data Source and I get the same errors:

        Connection failed:
        SQLState: '01000'
        SQL Server Error: 14
        [Microsoft][ODBC SQL Server Driver][TCP/IP Sockets]ConnectionOpen(InvalidInstance()).
        Connection failed:
        SQLState: '08001'
        SQL Server Error: 14
        [Microsoft][ODBC SQL Server Driver][TCP/IP Sockets]Invalid Connection

    From the non-Windows machines I can use the IP address of the SQL Server just fine. However, on the remote Windows machine, neither the IP address nor the named instance works. FYI, I can create an ODBC Data Source using the named instance on the machine actually running SQL Server (but this is, of course, nothing special; it just proves the instance isn't completely hosed). One interesting note: if I use SQL Studio 2005 from a Windows client machine, I can use the IP address to connect remotely. Still, the whole reason I bring this up is that I need a software package I've written to connect to SQL Server remotely from Windows machines as well. Previously the solution only needed to transfer data from SQL Server into a Postgres or MySQL database on non-Windows machines (due to DBA preference). However, now they also want to move the data from the legacy software to MySQL even on Windows. Any assistance would be most appreciated. Feel free to provide a full example connection string. Best
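
    Since a concrete connection string was requested, here is a sketch using the legacy "SQL Server" ODBC driver; the server address, instance, database, and credentials are placeholders:

        Driver={SQL Server};Server=192.168.1.50\SQLEXPRESS;Database=MyDb;Uid=myUser;Pwd=myPassword;

    If the instance listens on a fixed port, naming the port and forcing the TCP/IP network library (DBMSSOCN) sometimes sidesteps the InvalidInstance() error, since it skips the SQL Browser instance lookup:

        Driver={SQL Server};Server=192.168.1.50,1433;Network=DBMSSOCN;Database=MyDb;Uid=myUser;Pwd=myPassword;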

    Read the article

  • MySQL stops accepting connections over 3306, still working on localhost

    - by Ben Dilts
    I have a MySQL database that stopped accepting connections from my web server altogether. So I SSH'ed into the server and started checking its vitals: the hard disks had plenty of open space, there was plenty of available memory and swap, and nothing was eating the CPU (close to 100% idle). I even connected to MySQL locally and ran a few queries without any issues, but SHOW PROCESSLIST showed only my own connection, no others. Worst of all, no errors in the MySQL log even remotely coincided with the unavailability of the server. On the web server, I got an error saying "Lost connection to MySQL server during query" at the moment the unavailability started, followed by a bunch of "MySQL server has gone away" errors. There's only one other application on the server that accepts network connections, and I killed it (in case it was holding too many open connections or something), but that didn't help. Finally I just restarted the MySQL process, and everything is (for now) working again. What else should I check in these circumstances? Any idea what the problem might be? And how might I verify that it is in fact the problem?
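
    For the next occurrence, a few checks worth scripting in advance. This is a sketch of the usual suspects when remote connections die while localhost still works: an exhausted max_connections won't show in PROCESSLIST once clients have given up, and a host that racks up aborted connects can get blocked outright:

        SHOW VARIABLES LIKE 'max_connections';
        SHOW GLOBAL STATUS LIKE 'Max_used_connections';
        SHOW GLOBAL STATUS LIKE 'Threads_connected';
        SHOW GLOBAL STATUS LIKE 'Aborted_connects';
        -- if the web server's host was blocked after repeated failed connects:
        FLUSH HOSTS;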

    Read the article

  • mysql - moving to a lower performance server, how small can I go?

    - by pedalpete
    I've been running a site for a few years now which really isn't growing in traffic, and I want to save some money on hosting but keep it going for the loyal users of the site and API. The database has one nearly-4-million-row table and lives on a 4GB dual Xeon 5320 server. When I check server stats on this server with ps -aux, mysql is running at about 11% capacity, so there's no serious load. The main query against MySQL runs in about 0.45 seconds. I popped over to linode.com to see what kind of performance I could get out of one of their tiny boxes, and their 360MB-RAM Xen VPS returns the same query in 20 seconds. Clearly not good enough. I've looked at the MySQL variables on both machines, and they are very similar (I've included the SHOW VARIABLES output below, if anybody is interested). Is there a good way to decide what size server is needed based on what I'm coming from? Is it RAM that is likely making the difference with the large table size? Is there a way for me to figure out how much RAM would be ideal? Here's the output of SHOW VARIABLES (though I'm not sure it is important):

        Variable_name                     Value
        ------------------------------------------------------------
        auto_increment_increment          1
        auto_increment_offset             1
        automatic_sp_privileges           ON
        back_log                          50
        basedir                           /usr/
        bdb_cache_size                    8384512
        bdb_home                          /var/lib/mysql/
        bdb_log_buffer_size               262144
        bdb_logdir
        bdb_max_lock                      10000
        bdb_shared_data                   OFF
        bdb_tmpdir                        /tmp/
        binlog_cache_size                 32768
        bulk_insert_buffer_size           8388608
        character_set_client              latin1
        character_set_connection          latin1
        character_set_database            latin1
        character_set_filesystem          binary
        character_set_results             latin1
        character_set_server              latin1
        character_set_system              utf8
        character_sets_dir                /usr/share/mysql/charsets/
        collation_connection              latin1_swedish_ci
        collation_database                latin1_swedish_ci
        collation_server                  latin1_swedish_ci
        completion_type                   0
        concurrent_insert                 1
        connect_timeout                   10
        datadir                           /var/lib/mysql/
        date_format                       %Y-%m-%d
        datetime_format                   %Y-%m-%d %H:%i:%s
        default_week_format               0
        delay_key_write                   ON
        delayed_insert_limit              100
        delayed_insert_timeout            300
        delayed_queue_size                1000
        div_precision_increment           4
        keep_files_on_create              OFF
        engine_condition_pushdown         OFF
        expire_logs_days                  0
        flush                             OFF
        flush_time                        0
        ft_boolean_syntax                 + -

    For some reason, that table formats properly in the preview, but apparently not when viewing the question. Hopefully it isn't needed anyway.
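
    On "how much RAM would be ideal": the usual rule of thumb for a read-mostly workload is that the hot data plus the indexes the main query touches should fit in memory. A sketch of how to measure that ceiling with standard information_schema columns (the schema name is a placeholder):

        SELECT table_name,
               ROUND((data_length + index_length) / 1024 / 1024) AS total_mb,
               ROUND(index_length / 1024 / 1024) AS index_mb
        FROM information_schema.tables
        WHERE table_schema = 'mydb'
        ORDER BY total_mb DESC;

    If the big table's index_mb alone exceeds a candidate box's RAM (as it almost certainly does on a 360MB VPS), the 20-second timing is mostly disk seeks, and the smallest plan whose RAM covers the indexes is the realistic floor.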

    Read the article

  • MySQL Tables Missing/Corrupt After Recreation

    - by Synetech inc.
    Hi, yesterday I dumped my MySQL databases to an SQL file, renamed the ibdata1 file, recreated it, imported the SQL file, and then moved the new ibdata1 file into my MySQL data directory, deleting the old one. I've done this before without issue; however, this time something is not right. When I examine the (personal, not MySQL config) databases, they are all there, but they are empty… sort of. The data directory still has the .ibd files with the correct content in them, and I can view the table list in the databases, but not the tables themselves. (I have file-per-table enabled, and am using InnoDB as the default for everything.) For example, with the urls database and its urls table, I can successfully open mysql.exe or phpMyAdmin and run use urls;. I can even show tables; and see the expected table, but when I then try describe urls; or select * from urls;, MySQL complains that the table does not exist (even though it just listed it). (MySQL Administrator lists the databases but does not even list the tables; it indicates that the DBs are completely empty.) The problem now is that I have already deleted the SQL file (and cannot recover it even after scouring my hard drive). So I am trying to figure out a way to repair these databases/tables. I can't use the table repair function, since it complains that the table does not exist, and I can't dump the tables because, again, it complains that they don't exist. Like I said, the data itself is still present in the .ibd files and the table names are present; I just need a way to get MySQL to recognize that the tables exist in the databases (I can find the column names of the tables in question in the ibdata1 file using a hex editor). Any idea how I can repair this type of corruption? I don't mind rolling up my sleeves, digging in, and taking a bunch of steps to fix it. Thanks a lot.
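
    One avenue worth trying before anything destructive, since the .ibd files are intact: recreate each table with its exact original definition (so InnoDB re-registers it in the new data dictionary), then swap the saved .ibd back in. A sketch for the urls table; this is version-sensitive and can still fail on tablespace-ID mismatches, so work only on copies of the .ibd files:

        -- recreate the table with the exact original column/index definitions
        CREATE TABLE urls ( /* original definition goes here */ ) ENGINE=InnoDB;

        -- throw away the freshly created empty urls.ibd
        ALTER TABLE urls DISCARD TABLESPACE;

        -- copy the saved urls.ibd back into the database directory, then:
        ALTER TABLE urls IMPORT TABLESPACE;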

    Read the article

  • mysqldump --where with = operator doesn't get all rows - Help!

    - by JonathanLIVE
    I have a situation where a particular table now thinks it contains 4 petabytes of data. I know that sounds cool, but I assure you, it is only on a 60GB partition. This table has 9 fields. One of them is a domain_id field, which is the best field to identify rows by, as there are only approximately 6300 distinct values. The only other candidate field has over 2 million distinct values, which is just more difficult. I cannot do a straight mysqldump, because it will attempt to output all 4PB of data and fill the drive long before it gets close to that, so I need to surgically remove the good stuff, destroy the DB, and recreate it. I believe that if I can do a dump for each domain_id value, then I will get most of the usable data out of it. This is what I am trying to use:

        mysqldump -u root --skip-opt -q --no-create-info --skip-add-drop-table --max_allowed_packet=1000000000 --where="domain_id=10" database table > domains10.sql

    Using this, I expect every row with domain_id 10 to be exported. However, when I check the export, I am only getting 1 row, even though the DB shows many, many rows for that id. It is as though the operator just finds one match and then gives up. I have tried various operators: using < or > I am able to get more of the data, but the export stops short at certain rows where the data has been compromised. With over 6000 ids to go through, I can't easily narrow down which rows are breaking the export. So what I need is an operator that does what I thought = would do: simply give me an export of all records that match the specific field value. Also note, the only way I got this DB accessible at all was through innodb_force_recovery = 3. So I need to get this right, because after this is done, I have to drop the DB in order to make MySQL functional again. Looking forward to any helpful answers.
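
    If = keeps stopping at damaged rows, one salvage tactic is to dump each domain_id separately, so a corrupt row only poisons one slice. A sketch in shell, with connection details omitted and the database/table names kept from the command above:

        for id in $(mysql -u root -N -e "SELECT DISTINCT domain_id FROM database.table"); do
          mysqldump -u root --skip-opt -q --no-create-info --skip-add-drop-table \
            --where="domain_id=$id" database table > "domain_$id.sql" \
            || echo "$id" >> failed_ids.txt
        done

    The failed_ids.txt list then pinpoints exactly which slices hold the compromised rows.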

    Read the article

  • Removed Old Domain Trust. Now Progress (9.1D) can't open DB File

    - by RLH
    My company has an old server, running Progress 9.1D on a Windows 2000 VM, which was used by our old ERP system (Vantage 6 by Epicor). Vantage was our primary system for a very long time. About 2 years ago, we migrated to a larger, corporate ERP and cancelled our service contract with Epicor. Yesterday, we removed an AD trust between the corporate domain and the old AD domain we used in the Vantage days. After restarting the virtual server, I have been able to start the "ProService for 9.1D" Windows service; however, I cannot get Vantage to start back up. When I run the application, I get the error below:

        ** Could not connect to server for database [progress db file], errno 0. (1432)

    How can I fix this? FYI, I haven't had to work with Progress in years, and even then I wouldn't have considered myself a "novice"; I'm even less knowledgeable than that title would suggest. Vantage had a lot of internal tools, and I recall that Epicor support managed to prevent .pf scripts from being executed: if a Progress-specific patch needed to be applied, you had to do it from within the Vantage software, or they had to remote into the machine to do it. I may not be able to run a .pf script, but I do know that I can log into the console-based server application (sadly, I can't even recall what that utility was called). It's been a long time and I never had to dig into Progress that much. Please help, and feel free to ask questions; if you need more info, I'll update this post.

    Read the article

  • Understanding the MySQL Query Cache and when to implement it?

    - by Jeff
    On our current MySQL server the query cache is enabled:

        Qcache_hits: 31913
        Qcache_inserts: 50959
        Qcache_lowmem_prunes: 9320
        Qcache_not_cached: 209320
        Qcache_queries_in_cache: 986
        Com_update: 0
        Com_delete: 0

    I do not fully understand the query cache; I am reading about it currently and trying to understand it. Our database holds inventory data, customer data, employee data, sales data, and so forth. Any given query is very rarely run more than once; about the only repeat would be viewing a specific sale's information twice. Basically everything in our system changes constantly: rows are always being updated, deleted, and inserted, and off the top of my head I can't picture users running the same query twice within a week. Do I even need the query cache enabled? I am guessing that the inserts figure means 51k entries have been added, but only 986 of those are still being stored? Would it be a good idea to reset the cache, watch it for a week, and check how many of the cached queries are actually hit, say on a weekly basis, to see if it is returning any benefit? Any help/guidance on this is appreciated, thanks.
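
    Given a write-heavy workload with almost no repeated reads, the cache likely costs more (every write invalidates the cached entries for that table) than it saves. A sketch of how to take that one-week measurement and, if the hit rate stays negligible, turn the cache off without a restart:

        SHOW GLOBAL STATUS LIKE 'Qcache%';
        SHOW GLOBAL STATUS LIKE 'Com_select';
        -- hit rate is roughly Qcache_hits / (Qcache_hits + Com_select)

        RESET QUERY CACHE;                 -- empty it for a clean measurement window
        SET GLOBAL query_cache_size = 0;   -- disable at runtime if it earns nothing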

    Read the article

  • MySQL: Working With 192 Trillion Records... (Yes, 192 Trillion)

    - by Sarah
    Here's the question: considering 192 trillion records, what should my considerations be? My main concern is speed. Here's the table:

        CREATE TABLE `ref` (
          `id` INTEGER(13) NOT NULL AUTO_INCREMENT,
          `rel_id` INTEGER(13) NOT NULL,
          `p1` INTEGER(13) NOT NULL,
          `p2` INTEGER(13) DEFAULT NULL,
          `p3` INTEGER(13) DEFAULT NULL,
          `s` INTEGER(13) NOT NULL,
          `p4` INTEGER(13) DEFAULT NULL,
          `p5` INTEGER(13) DEFAULT NULL,
          `p6` INTEGER(13) DEFAULT NULL,
          PRIMARY KEY (`id`),
          KEY (`s`),
          KEY (`rel_id`),
          KEY (`p3`),
          KEY (`p4`)
        );

    Here are the queries:

        SELECT id, s FROM ref WHERE rel_id="$rel_id" AND p3="$p3" AND p4="$p4"

        SELECT rel_id, p1, p2, p3, p4, p5, p6 FROM ref WHERE id="$id"

        INSERT INTO ref (rel_id, p1, p2, p3, s, p4, p5, p6)
        VALUES ("$rel_id", "$p1", "$p2", "$p3", "$s", "$p4", "$p5", "$p6")

    Here are some notes: the SELECTs will be done much more frequently than the INSERT, though occasionally I want to add a few hundred records at a time. Load-wise, there will be nothing for hours, then maybe a few thousand queries all at once. I don't think I can normalize any more (I need the p values in combination), and the database as a whole is very relational. This will be the largest table by far (the next largest is about 900k rows). UPDATE (08/11/2010): interestingly, I've been given a second option. Instead of 192 trillion records I could store 2.6*10^16 (that's 26 quadrillion), but in this second option I would only need to store one bigint(18) as the index in a table. That's it, just the one column. So I would just be checking for the existence of a value, occasionally adding records, never deleting them. That makes me think there must be a better solution than MySQL for simply storing numbers... Given this second option, should I take it, or stick with the first? [edit] We just got news of some testing that's been done: 100 million rows with this setup returns the query in 0.0004 seconds. [/edit]

    Read the article

  • Rails Rake Error with XAMPP mysql database

    - by edu222
    I have installed XAMPP on my Win7 machine and have Apache and MySQL running there. I set up Rails to work with XAMPP as described in the "XAMPP and Rails" tutorial, which advises adding this to the XAMPP httpd.conf:

        Listen 3000
        LoadModule rewrite_module modules/mod_rewrite.so

        #################################
        # RUBY SETUP
        #################################
        <VirtualHost *:3000>
          ServerName rails
          DocumentRoot "c:/xampp/htdocs/FirstProject/public"
          <Directory "c:/xampp/htdocs/FirstProject/public/">
            Options ExecCGI FollowSymLinks
            AllowOverride all
            Allow from all
            Order allow,deny
            AddHandler cgi-script .cgi
            AddHandler fastcgi-script .fcgi
          </Directory>
        </VirtualHost>
        #################################
        # RUBY SETUP
        #################################

    XAMPP runs on the default localhost, and MySQL remains unchanged, without a password. I created a Rails app with a MySQL database like this:

        rails -d mysql C:/xampp/htdocs/FirstProject

    Then I started ruby script/server from within the FirstProject directory, and localhost:3000/ shows the classic Rails welcome page. I then ran a basic scaffold command:

        ruby script/generate scaffold FirstProject name:string email:string

    When I run rake db:migrate, I get the following error:

        C:\xampp\htdocs\FirstProject>rake db:migrate --trace
        (in C:/xampp/htdocs/FirstProject)
        ** Invoke db:migrate (first_time)
        ** Invoke environment (first_time)
        ** Execute environment
        ** Execute db:migrate
        rake aborted!
        undefined method `init' for Mysql:Class
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:70:in `mysql_connection'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `send'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `new_connection'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:245:in `checkout_new_connection'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:188:in `checkout'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `loop'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `checkout'
        C:/Ruby/lib/ruby/1.8/monitor.rb:242:in `synchronize'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:183:in `checkout'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:98:in `connection'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:326:in `retrieve_connection'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:123:in `retrieve_connection'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:115:in `connection'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:435:in `initialize'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:400:in `new'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:400:in `up'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:383:in `migrate'
        C:/Ruby/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/tasks/databases.rake:116
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `call'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `execute'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `each'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `execute'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:597:in `invoke_with_call_chain'
        C:/Ruby/lib/ruby/1.8/monitor.rb:242:in `synchronize'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:583:in `invoke'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2051:in `invoke_task'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `each'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2023:in `top_level'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2001:in `run'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/bin/rake:31
        C:/Ruby/bin/rake:19:in `load'
        C:/Ruby/bin/rake:19

    Any idea how to fix this? Thanks in advance.
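
    For what it's worth, "undefined method `init' for Mysql:Class" usually means Rails fell back to its bundled pure-Ruby MySQL driver instead of a native mysql gem. A sketch of the commonly reported Windows fix; the gem version and the DLL detail are the usual combination for this era, so treat them as a starting point rather than a certainty:

        gem install mysql --version 2.8.1
        rem then copy libmySQL.dll from a MySQL 5.0.x distribution into C:\Ruby\bin
        rem (the gem's C extension expects the 5.0-era client library)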

    Read the article

  • migrating simple rails database to mysql

    - by joseph-misiti
    I am interested in creating a Rails app with a MySQL database. I am new to Rails and am just trying to start with something simple:

        rails -d mysql MyMoviesSQL
        cd MyMoviesSQL
        script/generate scaffold Movies title:string rating:integer
        rake db:migrate

    I am seeing the following error:

        rake aborted!
        NoMethodError: undefined method `ord' for 0:Fixnum: SET NAMES 'utf8'

    With --trace:

        ** Invoke db:migrate (first_time)
        ** Invoke environment (first_time)
        ** Execute environment
        ** Execute db:migrate
        rake aborted!
        NoMethodError: undefined method `ord' for 0:Fixnum: SET NAMES 'utf8'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract_adapter.rb:219:in `log'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:323:in `execute'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:599:in `configure_connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:594:in `connect'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:203:in `initialize'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:75:in `new'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:75:in `mysql_connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `send'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `new_connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:245:in `checkout_new_connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:188:in `checkout'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `loop'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `checkout'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/monitor.rb:242:in `synchronize'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:183:in `checkout'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:98:in `connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:326:in `retrieve_connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:123:in `retrieve_connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:115:in `connection'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:435:in `initialize'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:400:in `new'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:400:in `up'
        /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:383:in `migrate'
        /Library/Ruby/Gems/1.8/gems/rails-2.3.5/lib/tasks/databases.rake:116
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `call'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `execute'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `each'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `execute'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:597:in `invoke_with_call_chain'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/monitor.rb:242:in `synchronize'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:583:in `invoke'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2051:in `invoke_task'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `each'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2023:in `top_level'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2001:in `run'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run'
        /Library/Ruby/Gems/1.8/gems/rake-0.8.7/bin/rake:31
        /usr/bin/rake:19:in `load'
        /usr/bin/rake:19

    Here are my versions: Rails 2.3.5, Ruby 1.8.6. gem list gives: actionmailer (2.3.5, 1.3.6), actionpack (2.3.5, 1.13.6), actionwebservice (1.2.6), activerecord (2.3.5, 1.15.6), activeresource (2.3.5), activesupport (2.3.5, 1.4.4), acts_as_ferret (0.4.1), capistrano (2.0.0), cgi_multipart_eof_fix (2.5.0), daemons (1.0.9), dbi (0.4.3), deprecated (2.0.1), dnssd (0.6.0), fastthread (1.0.1), fcgi (0.8.7), ferret (0.11.4), gem_plugin (0.2.3), highline (1.2.9), hpricot (0.6), libxml-ruby (0.9.5, 0.3.8.4), mongrel (1.1.4), needle (1.3.0), net-sftp (1.1.0), net-ssh (1.1.2), rack (1.0.1), rails (2.3.5), rake (0.8.7, 0.7.3), RedCloth (3.0.4), ruby-openid (1.1.4), ruby-yadis (0.3.4), rubygems-update (1.3.6), rubynode (0.1.3), sqlite3-ruby (1.2.1), termios (0.9.4). Also, if I need to add a patch to Fixnum, can someone please tell me which file to add the patch to? Thanks for your help.
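
    On the Fixnum question: ord exists from Ruby 1.8.7 onward, so the cleanest fixes are upgrading Ruby or using a mysql gem version that doesn't call it. Failing that, a minimal backport sketch; placing it at the very top of config/environment.rb (before Rails::Initializer.run) ensures it is defined before the MySQL adapter connects:

        # Backport Fixnum#ord for Ruby 1.8.6 (an integer's ordinal is itself)
        class Fixnum
          unless method_defined?(:ord)
            def ord
              self
            end
          end
        end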

    Read the article

  • MySQL Database Query Problem

    - by moustafa
    I need your help! I need to query a table in my database that holds records of goods sold. I want the query to find a particular product and also calculate the quantity sold. There are 300 products now, but the number will increase in the future. Below is a sample of my DB table:

        -- Table structure for litorder
        CREATE TABLE `litorder` (
          `id` int(10) NOT NULL auto_increment,
          `name` varchar(50) NOT NULL default '',
          `address` varchar(50) NOT NULL default '',
          `xdate` date NOT NULL default '0000-00-00',
          `ref` varchar(20) NOT NULL default '',
          `code1` varchar(50) NOT NULL default '', `code2` varchar(50) NOT NULL default '',
          `code3` varchar(50) NOT NULL default '', `code4` varchar(50) NOT NULL default '',
          `code5` varchar(50) NOT NULL default '', `code6` varchar(50) NOT NULL default '',
          `code7` varchar(50) NOT NULL default '', `code8` varchar(50) NOT NULL default '',
          `code9` varchar(50) NOT NULL default '', `code10` varchar(50) NOT NULL default '',
          `code11` varchar(50) character set latin1 collate latin1_bin NOT NULL default '',
          `code12` varchar(50) NOT NULL default '', `code13` varchar(50) NOT NULL default '',
          `code14` varchar(50) NOT NULL default '', `code15` varchar(50) NOT NULL default '',
          `product1` varchar(100) NOT NULL default '0', `product2` varchar(100) NOT NULL default '0',
          `product3` varchar(100) NOT NULL default '0', `product4` varchar(100) NOT NULL default '0',
          `product5` varchar(100) NOT NULL default '0', `product6` varchar(100) NOT NULL default '0',
          `product7` varchar(100) NOT NULL default '0', `product8` varchar(100) NOT NULL default '0',
          `product9` varchar(100) NOT NULL default '0', `product10` varchar(100) NOT NULL default '0',
          `product11` varchar(100) NOT NULL default '0', `product12` varchar(100) NOT NULL default '0',
          `product13` varchar(100) NOT NULL default '0', `product14` varchar(100) NOT NULL default '0',
          `product15` varchar(100) NOT NULL default '0',
          `price1` int(10) NOT NULL default '0', `price2` int(10) NOT NULL default '0', `price3` int(10) NOT NULL default '0',
          `price4` int(10) NOT NULL default '0', `price5` int(10) NOT NULL default '0', `price6` int(10) NOT NULL default '0',
          `price7` int(10) NOT NULL default '0', `price8` int(10) NOT NULL default '0', `price9` int(10) NOT NULL default '0',
          `price10` int(10) NOT NULL default '0', `price11` int(10) NOT NULL default '0', `price12` int(10) NOT NULL default '0',
          `price13` int(10) NOT NULL default '0', `price14` int(10) NOT NULL default '0', `price15` int(10) NOT NULL default '0',
          `quantity1` int(10) NOT NULL default '0', `quantity2` int(10) NOT NULL default '0', `quantity3` int(10) NOT NULL default '0',
          `quantity4` int(10) NOT NULL default '0', `quantity5` int(10) NOT NULL default '0', `quantity6` int(10) NOT NULL default '0',
          `quantity7` int(10) NOT NULL default '0', `quantity8` int(10) NOT NULL default '0', `quantity9` int(10) NOT NULL default '0',
          `quantity10` int(10) NOT NULL default '0', `quantity11` int(10) NOT NULL default '0', `quantity12` int(10) NOT NULL default '0',
          `quantity13` int(10) NOT NULL default '0', `quantity14` int(10) NOT NULL default '0', `quantity15` int(10) NOT NULL default '0',
          `amount1` int(10) NOT NULL default '0', `amount2` int(10) NOT NULL default '0', `amount3` int(10) NOT NULL default '0',
          `amount4` int(10) NOT NULL default '0', `amount5` int(10) NOT NULL default '0', `amount6` int(10) NOT NULL default '0',
          `amount7` int(10) NOT NULL default '0', `amount8` int(10) NOT NULL default '0', `amount9` int(10) NOT NULL default '0',
          `amount10` int(10) NOT NULL default '0', `amount11` int(10) NOT NULL default '0', `amount12` int(10) NOT NULL default '0',
          `amount13` int(10) NOT NULL default '0', `amount14` int(10) NOT NULL default '0', `amount15` int(10) NOT NULL default '0',
          `totalNaira` double(20,0) NOT NULL default '0',
          `totalDollar` int(20) NOT NULL default '0',
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 COMMENT='InnoDB free: 4096 kB; InnoDB free: 4096 kB; InnoDB free: 409';

        -- Records for table litorder
        insert into litorder values
          (27, 'Sanyaolu Fisayo', '14 Adegboyega Street Palmgrove Lagos', '2010-05-31', '', 'DL 001', 'DL 002', 'DL 003', '', '', '', '', '', '', '', '', '', '', '', '', 'AILMENT & PREVENTION DVD- ENGLISH', 'AILMENT & PREVENTION DVD- HAUSA', 'BEAUTY CD', '', '', '', '', '', '', '', '', '', '', '', '', 800, 800, 3000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16, 16, 20, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 12800, 12800, 60000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, '85600', 563),
          (28, 'Irenonse Esther', 'Lagos,Nigeria', '2010-06-01', '', 'DL 005', 'DL 008', 'FC 004', '', '', '', '', '', '', '', '', '', '', '', '', 'GET HEALTHY DVD', 'YOUR FUTURE DVD', 'FOREVER FACE CAP (YELLOW)', '', '', '', '', '', '', '', '', '', '', '', '', 1000, 900, 2000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2000, 1800, 6000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, '9800', 64),
          (29, 'Kalu Lekway', 'Lagos, Nigeria', '2010-06-01', '', 'DL 001', 'DL 003', '', '', '', '', '', '', '', '', '', '', '', '', '', 'AILMENT & PREVENTION DVD- ENGLISH', 'BEAUTY CD', '', '', '', '', '', '', '', '', '', '', '', '', '', 800, 3000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2400, 18000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, '20400', 133),
          (30, 'Dele', 'Ilupeju', '2010-06-02', '', 'DL 001', 'DL 003', '', '', '', '', '', '', '', '', '', '', '', '', '', 'AILMENT & PREVENTION DVD- ENGLISH', 'BEAUTY CD', '', '', '', '', '', '', '', '', '', '', '', '', '', 800, 3000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 10, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 8000, 30000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, '38000', 250);
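
    Given that layout, one way to total a product across all fifteen slots is to unpivot the repeating columns with UNION ALL and aggregate. A sketch showing the first three slots; extend the UNION through product15/quantity15/amount15 in the same pattern:

        SELECT product, SUM(quantity) AS total_qty, SUM(amount) AS total_amount
        FROM (
          SELECT product1 AS product, quantity1 AS quantity, amount1 AS amount FROM litorder
          UNION ALL
          SELECT product2, quantity2, amount2 FROM litorder
          UNION ALL
          SELECT product3, quantity3, amount3 FROM litorder
          -- ... repeat through product15 ...
        ) AS line_items
        WHERE product <> '' AND product <> '0'
        GROUP BY product;

    Longer term, moving line items into a child table (order_id, code, product, price, quantity, amount) turns this into a one-line GROUP BY and removes the 15-slot ceiling as the product count grows.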

    Read the article

  • Database connection timeout

    - by Clinton Bosch
    Hi, I have read so many articles on the Internet about this problem, but none seem to have a clear solution. Could someone please give me a definite answer as to why I am getting database timeouts? The app is a GWT app hosted on a Tomcat 5.5 server. I use Spring, and the session factory is created in applicationContext.xml as follows:

        <bean id="sessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
          <property name="hibernateProperties">
            <props>
              <prop key="hibernate.dialect">${connection.dialect}</prop>
              <prop key="hibernate.connection.username">${connection.username}</prop>
              <prop key="hibernate.connection.password">${connection.password}</prop>
              <prop key="hibernate.connection.url">${connection.url}</prop>
              <prop key="hibernate.connection.driver_class">${connection.driver.class}</prop>
              <prop key="hibernate.show_sql">${show.sql}</prop>
              <prop key="hibernate.hbm2ddl.auto">update</prop>
              <prop key="hibernate.c3p0.min_size">5</prop>
              <prop key="hibernate.c3p0.max_size">20</prop>
              <prop key="hibernate.c3p0.timeout">1800</prop>
              <prop key="hibernate.c3p0.max_statements">50</prop>
              <prop key="hibernate.c3p0.idle_test_period">300</prop>
            </props>
          </property>
          <property name="annotatedClasses">
            <list>
              <value>za.co.xxxx.traintrack.server.model.Answer</value>
              <value>za.co.xxxx.traintrack.server.model.Company</value>
              <value>za.co.xxxx.traintrack.server.model.CompanyRegion</value>
              <value>za.co.xxxx.traintrack.server.model.Merchant</value>
              <value>za.co.xxxx.traintrack.server.model.Module</value>
              <value>za.co.xxxx.traintrack.server.model.Question</value>
              <value>za.co.xxxx.traintrack.server.model.User</value>
              <value>za.co.xxxx.traintrack.server.model.CompletedModule</value>
            </list>
          </property>
        </bean>

        <bean id="dao" class="za.co.xxxx.traintrack.server.DAO">
          <property name="sessionFactory" ref="sessionFactory"/>
          <property name="adminUsername" value="${admin.user.name}"/>
          <property name="adminPassword" value="${admin.user.password}"/>
          <property name="godUsername" value="${god.user.name}"/>
          <property name="godPassword" value="${god.user.password}"/>
        </bean>

    All works fine until the next day:

        INFO | jvm 1 | 2010/06/15 14:42:27 | 2010-06-15 18:42:27,804 WARN [JDBCExceptionReporter] : SQL Error: 0, SQLState: 08S01
        INFO | jvm 1 | 2010/06/15 14:42:27 | 2010-06-15 18:42:27,821 ERROR [JDBCExceptionReporter] : The last packet successfully received from the server was 38729 seconds ago. The last packet sent successfully to the server was 38729 seconds ago, which is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
        INFO | jvm 1 | 2010/06/15 14:42:27 | Jun 15, 2010 6:42:27 PM org.apache.catalina.core.ApplicationContext log
        INFO | jvm 1 | 2010/06/15 14:42:27 | SEVERE: Exception while dispatching incoming RPC call

    I have read so many different things (none of which worked). Please help.
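
    One frequent gotcha here: only a handful of c3p0 settings (min_size, max_size, timeout, max_statements, idle_test_period) pass through the hibernate.c3p0.* prefix; the connection-testing knobs have to come from a c3p0.properties file on the classpath. A sketch of settings that keep idle connections from outliving MySQL's default 8-hour wait_timeout; the property names are c3p0's own, the values are illustrative:

        # c3p0.properties (on the classpath, e.g. WEB-INF/classes)
        c3p0.testConnectionOnCheckout=true
        c3p0.preferredTestQuery=SELECT 1
        c3p0.idleConnectionTestPeriod=300
        c3p0.maxIdleTime=1800

    With a cheap preferredTestQuery in place, checkout-time testing is inexpensive, and stale connections get replaced instead of being handed to the application after a long idle night.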

    Read the article

  • How to find an entry-level job after you already have a graduate degree?

    - by Uri
    Note: I asked this question in early 2009. A couple of months later, I found a great job. I've previously updated this question with some tips for whoever ends up in a similar situation, and now cleaned it up a little for the benefit of the fresh batch of graduates. Original post: In my early 20s I abandoned a great C++ development career path in a major company to go to graduate school and get a research masters (3 years). I did another year in industrial research, and then moved to the US to attend graduate school again, getting another masters and a Ph.D in software engineering from a top school (another 6 years down the drain). I was coding the whole way throughout my degrees (core Java and Eclipse plug-ins) and working on research related to software engineering (usability of APIs). I ended up graduating the year of the recession, with a son on the way and the prospects of no healthcare. Academic jobs and industrial research jobs are quite scarce. Initially, I was naive, thinking that with my background, I could easily find a coding job. Big mistake. It turns out that I'm in a complicated position. Entry level positions are usually offered to college undergraduates. I attended my school's career fairs, but you could immediately see signs of Ph.D. aversion and overqualification issues. Some of the recruiters I spoke with explicitly told me that they wanted 20 year olds with clean slates, and some were looking for interns since they are in various forms of hiring freezes. I managed to get a couple of interviews from these career fairs and through recruiters. However, since I've been out of school for a long time and programming primarily in Java, I am also no longer proficient in C/C++ and the usual range of college-level interview questions that everyone uses. I had no problems with this when I was 19 and interviewing for my first job since a lot of what you do in C is manipulate pointers and I was coding C++ for fun and for school. Later I was routinely doing pointer manipulation on the job, and during my first masters taught college courses with data structures and C++. But even though I remember many properties of C++ well, it's been close to ten years since I regularly used C++ and pointers. As a Java developer I rarely had to work at this level, but experience in OOD and in writing good maintainable code is meaningless for C++ interviews. Reading books as a refresh and looking at sample code did not do the trick. I also looked at mid-to-senior level Java positions, but most of them focused on J2EE APIs rather than on core Java and required a certain number of years in industrial positions. Coding research tools and prior C++ experience doesn't count. So that sends me back to entry-level jobs that are posted through job-boards, and these are not common (mostly they are Monster junk), and small companies are even less likely to answer a Ph.D. compared to the giants who participate in top-10 career fairs. Even worse, in many companies initial screening is done by HR folks who really don't want to deal with anything anomalous like a Ph.D. Any tips on how I should approach this intractable position? For example, what should I write in cover letters? Note that while immigration is not an issue for me, I cannot go freelance as I need the benefits (and in particular group health insurance). During my studies I had no time to contribute to open-source projects or maintain a popular blog, so even if I invested in that now there would be no immediate benefit. 
Updates: In the two months after posting this I received several offers to work as a core Java developer in the financial industry and accepted one from a firm where I am working to this day. For those who find themselves in similar situations, here are my tips: Give up on trying to find an entry level positions. You can't undo time. Accept the fact that there is Ph.D. discrimination in the job market (some might say rightfully so). It is legal to discriminate based on education. No point fighting it. The most important tip is to focus on the language you are comfortable with. The sad truth about programming in a particular language is that it is not like riding a bike. If you haven't used a language in the last few years, and can't actually apply it routinely (not just as a refresher) before you start your search, it is going to be very difficult to do well in an interview. Now that I'm interviewing others, I routinely see it in folks with a mixed C++/Java background. We maintain "a shadow" of the old language but end up with a weird mix that makes it hard to interview on either. Entry-level folks are at an advantage here since they usually have one language. Memory can help you do great in a screening interview, but without recent day-to-day experience, code tests will be difficult. Despite the supposed relation, core Java programming and J2EE programming are two different things with different skillsets. If you come from academia, you likely have very little J2EE experience and may find it hard to get accepted for a J2EE job. J2EE jobs seem to have a larger list of acronyms in their requirements. In addition, from interviewing J2EE developers it seems that for many there is a focus on mastering specific APIs and architectures, whereas core Java development tends to be secondary. In the same way that I can no longer manipulate pointers well, a J2EE developer may have difficulties doing low level Java manipulation. This puts you at a relative advantage in competing for core Java jobs! If you are able to work for startups (in terms of family life and stability) or migrate to startup-rich areas such as the west coast, you can find many exciting opportunities where advanced degrees are a benefit. I've since been approached by several startups, although I had to decline. Work through a recruiter if possible. They have direct contacts with the hiring parties, allowing you to "stand out". It is better to get a clear yes/no confirmation from a recruiter on whether a company might be interested in interviewing you, than it is to send your resume and hope that someone will ever see it. Recruiters are also a great way of bypassing HR. However, also beware of recruiters. They have a vested interest and will go to various shady practices and pressure tactics. To find a good recruiter, talk to a friend who declined a job offer he got through a recruiter. A good recruiter, to me, is measured in how they handle that. Interview for the jobs that require your core strength. If you're rusty or entirely unfamiliar with a technology around which the job revolves, you're probably not a good match. Yes, you probably have the talent to master them, but most companies would want "instant gratification". I got my offers from companies that wanted core Java developer. I didn't do well on places that wanted advance C++ because I am too rusty and not up to date on recent libraries. I also didn't hear from companies that wanted lots of J2EE experience, and that's ok. 
Finding companies that want core Java without web is harder, but they exist in specific industries (e.g., finance, defense). This requires a lot more legwork in terms of search, but these jobs do exist. There are different interview styles. Some companies focus on puzzles, some companies focus on algorithms, and some companies focus on design and coding skills. I had the most success in places where the questions were the most related to the function I would have been performing. Pick companies accordingly as well.

    Read the article

  • Adding to database. No repeat on refresh

    - by kevstarlive
    I have this code in episode.php:

        <?php
        $feedback = new feedback;
        $articles = $feedback->fetch_all();

        if (isset($_POST['name'], $_POST['post'])) {
            $cast = $_GET['id'];
            $name = $_POST['name'];
            $email = $_POST['email'];
            $post = nl2br($_POST['post']);
            $ipaddress = $_SERVER['REMOTE_ADDR'];

            if (empty($name) or empty($post)) {
                $error = 'All Fields Are Required!';
            } else {
                $query = $pdo->prepare('INSERT INTO comments (cast, name, email, post, ipaddress) VALUES (?, ?, ?, ?, ?)');
                $query->bindValue(1, $cast);
                $query->bindValue(2, $name);
                $query->bindValue(3, $email);
                $query->bindValue(4, $post);
                $query->bindValue(5, $ipaddress);
                $query->execute();
            }
        }
        ?>
        <div align="center">
          <strong>Give us your feedback?</strong><br /><br />
          <?php if (isset($error)) { ?>
            <small style="color:#aa0000;"><?php echo $error; ?></small><br /><br />
          <?php } ?>
          <form action="episode.php?id=<?php echo $data['cast_id']; ?>" method="post" autocomplete="off" enctype="multipart/form-data">
            <input type="text" name="name" placeholder="Name" /> /
            <input type="text" name="email" placeholder="Email" /><small style="color:#aa0000;">*</small><br /><br />
            <textarea rows="10" cols="50" name="post" placeholder="Comment"></textarea><br /><br />
            <input type="submit" onclick="myFunction()" value="Add Comment" /><br /><br />
            <small style="color:#aa0000;">* <b>Email will not be displayed publicly</b></small><br />
          </form>
        </div>

    And in include.php:

        class feedback {
            public function fetch_all() {
                global $pdo;
                $query = $pdo->prepare("SELECT * FROM comments");
                $query->execute();
                return $query->fetchAll();
            }
        }

    This code updates the database as it is supposed to, and afterwards it reloads the current page, as specified in the form action. But when I refresh the page to see the comment that was added, the browser asks to resubmit the form, and if I confirm, the comment is added again. How can I stop this from happening? Maybe I could hide the comment box and display a thank-you message, but that would not stop a repeat entry. Please help. Thank you. Kev
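
    The standard cure is the Post/Redirect/Get pattern: after a successful INSERT, send the browser a redirect back to the same URL, so the page the user later refreshes was reached via GET rather than POST. A sketch of the change inside the insert branch above; the posted=1 flag is just an illustrative way to show a thank-you note:

        // after $query->execute() succeeds, redirect instead of rendering:
        header('Location: episode.php?id=' . urlencode($cast) . '&posted=1');
        exit;

        // later, where the page renders:
        if (isset($_GET['posted'])) {
            echo '<p>Thanks, your comment was added.</p>';
        }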

    Read the article

  • Zend models and database relationships

    - by user608341
    Hi people, I'm starting out with the Zend Framework and I'm a little bit confused by models and relationships (one-to-many, many-to-many, etc.). The "Zend Framework Quick Start" says to create a Zend_Db_Table, a Data Mapper, and finally our model class. Suppose we have a database like this:

        table A (
          id integer primary key,
          name varchar(50)
        );

        table B (
          id integer primary key,
          a_id integer references A
        );

    Then I'll create Application_Model_DbTable_A extends Zend_Db_Table_Abstract, Application_Model_AMapper, Application_Model_A, Application_Model_DbTable_B extends Zend_Db_Table_Abstract, Application_Model_BMapper, and Application_Model_B. If I understood correctly, I have to store the reference information in Application_Model_DbTable_A:

        protected $_dependentTables = array('B');

    and in Application_Model_DbTable_B:

        protected $_referenceMap = array(
            'A' => array(
                'columns'       => array('a_id'),
                'refTableClass' => 'A',
                'refColumns'    => array('id')
            )
        );

    and my model classes:

        class Application_Model_A
        {
            protected $_id;
            protected $_name;

            public function __construct(array $options = null)
            {
                if (is_array($options)) {
                    $this->setOptions($options);
                }
            }

            public function __set($name, $value)
            {
                $method = 'set' . $name;
                if (('mapper' == $name) || !method_exists($this, $method)) {
                    throw new Exception('Invalid property');
                }
                $this->$method($value);
            }

            public function __get($name)
            {
                $method = 'get' . $name;
                if (('mapper' == $name) || !method_exists($this, $method)) {
                    throw new Exception('Invalid property');
                }
                return $this->$method();
            }

            public function setOptions(array $options)
            {
                $methods = get_class_methods($this);
                foreach ($options as $key => $value) {
                    $method = 'set' . ucfirst($key);
                    if (in_array($method, $methods)) {
                        $this->$method($value);
                    }
                }
                return $this;
            }

            public function setName($name)
            {
                $this->_name = (string) $name;
                return $this;
            }

            public function getName()
            {
                return $this->_name;
            }

            public function setId($id)
            {
                $this->_id = (int) $id;
                return $this;
            }

            public function getId()
            {
                return $this->_id;
            }
        }

        class Application_Model_B
        {
            protected $_id;
            protected $_a_id;

            // __construct, __set, __get and setOptions are identical to Application_Model_A's

            public function setA_id($a_id)
            {
                $this->_a_id = (int) $a_id;
                return $this;
            }

            public function getA_id()
            {
                return $this->_a_id;
            }

            public function setId($id)
            {
                $this->_id = (int) $id;
                return $this;
            }

            public function getId()
            {
                return $this->_id;
            }
        }

    Is that right?
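
    That matches the Quick Start convention. Once _referenceMap and _dependentTables are in place, the payoff is navigating between rows without hand-written joins; a minimal sketch, assuming the DbTable classes resolve under the short names 'A' and 'B' used above:

        $tableA = new Application_Model_DbTable_A();
        $rowA   = $tableA->find(1)->current();       // the A row with id = 1
        $rowsB  = $rowA->findDependentRowset('B');   // every B row whose a_id = 1

        $tableB = new Application_Model_DbTable_B();
        $rowB   = $tableB->find(5)->current();
        $parent = $rowB->findParentRow('A');         // the A row this B belongs to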

    Read the article

  • In the Big Data era, past the 1000th Oracle Exadata Database Machine sold

    - by user645740
    As the wind has been whispering for some time now, the BIG DATA era has arrived: ever more data accumulates, and we manage ever more data. A good part of this huge volume of data is stored in Oracle databases. And what could run these Oracle databases better, faster, and more efficiently than Oracle's strategic high-end solution, the Oracle Exadata Database Machine? The data has countless sources; a few examples where growth is enormous: communications data and CDRs; banking and government transactions; location information (spatial, location, GPS, ...), as we could recently read in the "affairs" around certain phones and operating systems; e-mails; social sites; smart meters; household appliances; and so on. At what pace are Exadata sales growing? Exadata was announced in the autumn of 2008. In the report at the end of Oracle's fiscal year we can read that Exadata is an unparalleled success, with Oracle customers having already bought more than 1000 Exadata machines, said Mark Hurd, President of Oracle: "In addition to record setting software sales, our Exadata and Exalogic systems also made a strong contribution to our growth in Q4," said Oracle President, Mark Hurd. "Today there are more than 1,000 Exadata machines installed worldwide. Our goal is to triple that number in FY12." Larry Ellison, Oracle's chief executive, stated that Oracle is growing ever faster both in cloud computing and in in-memory databases: "In FY11 Oracle's database business experienced its fastest growth in a decade," said Oracle CEO, Larry Ellison. "Over the past few years we added features to the Oracle database for both cloud computing and in-memory databases that led to increased database sales this past year. Lately we've been focused on the big business opportunity presented by Big Data." In the Big Data era you can achieve savings with Exadata; watch the following video, but carefully, because it will make you think: Oracle Exadata: Are You Ready?

    Read the article

  • webhost4life, please give me my data back. My website will not work without the database.

    - by Shervin Shakibi
    I have about 4 or 5 accounts with WebHost4life.com; they belong to customers of mine who, based on my recommendation, have been hosting with webhost4life.com. A few days ago, for some reason, they decided to migrate one of these accounts to a new server. They moved everything and created a new database on the new server, but the new database is empty. After spending hours with tech support, they acknowledged the problem and assured me it would take an hour or two for my database to be populated with the data. That was about 7 hours ago. Oh, and by the way, I pay extra for the backup plan, and yes, you guessed it: none of my backups are there. Needless to say, I'm very scared and disappointed. No one is responding to my emails or phone calls. After searching the web, I found out this has happened before; in some cases it took them days to fix the problem, and many customers never got it resolved and switched hosting companies. I would love to do that too, but I need my 2GB database back before I start shopping around for a new hosting company. Stay away from WebHost4life.

    Read the article
