Search Results

Search found 29093 results on 1164 pages for 'oracle mysql cluster'.


  • [Title garbled in extraction: a Chinese-language post about an Oracle SQL trick]

    - by Todd Bao
    [Body garbled in extraction; the recoverable SQL (a MODEL-clause query that splits the string 'oracle' into one character per row) and the source link are kept below.]
    select c
      from (select *
              from (select 'oracle' cc, level no
                      from dual
                    connect by level <= length('oracle'))
            model return updated rows
            dimension by (no)
            measures (cc c, no n)
            rules (c[any] = substr(c[cv()], n[cv()], 1)))
    /
    Original thread by Todd: http://www.itpub.net/forum.php?mod=viewthread&action=printable&tid=1253982

    Read the article

  • [Title garbled in extraction: a Chinese-language post about Oracle Support service requests]

    - by Steve He
    [Body garbled in extraction; the recoverable fragments are the Oracle Support service-request status names it walks through, namely Work in Progress, Development Working, Customer Working, Solution Offered, and Auto-Close / Close Initiated, and its closing line, "Help us help you."]

    Read the article

  • Mac OS X 10.6 Setup for Apache/MySQL/Perl

    - by Russell C.
    I just got a new Mac and have been trying to set up a local development environment for my Perl applications for a few days now with no luck. I'm getting nowhere fast, so I hope someone else who has done this successfully can help. I started by installing MAMP, which I thought would take care of everything for me, but unfortunately it doesn't take care of some important Perl modules. I used CPAN to install all our required modules, except that it seems DBD::mysql doesn't install correctly through CPAN. After reading a lot online, lots of people reported problems with this and recommended using MacPorts to install the module, which I have tried doing with no luck using the following command:
    sudo port install p5-dbd-mysql
    After what seems like a successful install of DBD::mysql, Apache continues to report the following error when trying to run any of our Perl scripts:
    [Fri Apr 30 18:51:07 2010] [error] [client 127.0.0.1] install_driver(mysql) failed: Can't locate DBD/mysql.pm in @INC (@INC contains: /Library/Perl/Updates/5.10.0/darwin-thread-multi-2level /Library/Perl/Updates/5.10.0 /System/Library/Perl/5.10.0/darwin-thread-multi-2level /System/Library/Perl/5.10.0 /Library/Perl/5.10.0/darwin-thread-multi-2level /Library/Perl/5.10.0 /Network/Library/Perl/5.10.0/darwin-thread-multi-2level /Network/Library/Perl/5.10.0 /Network/Library/Perl /System/Library/Perl/Extras/5.10.0/darwin-thread-multi-2level /System/Library/Perl/Extras/5.10.0 .) at (eval 1835) line 3.
    [Fri Apr 30 18:51:07 2010] [error] [client 127.0.0.1] Perhaps the DBD::mysql perl module hasn't been fully installed,
    [Fri Apr 30 18:51:07 2010] [error] [client 127.0.0.1] or perhaps the capitalisation of 'mysql' isn't right.
    [Fri Apr 30 18:51:07 2010] [error] [client 127.0.0.1] Available drivers: DBM, ExampleP, File, Gofer, Proxy, SQLite, Sponge.
    I'm not sure where to go from here, but my Mac isn't much of a development environment if Perl isn't able to talk to the database. I'd really appreciate any help and advice you might be able to provide in getting my system set up successfully. Thanks in advance!

    Read the article

  • How to set up port forwarding from my webserver (Apache) to my database server (MySQL)

    - by karman888
    Hello again guys, and thank you for your help so far. Here is my problem: I have two remote dedicated servers, one webserver that runs Apache, and one db server that runs MySQL. The Apache server is visible on the internet of course, but the second server is only visible to the Apache server because they are connected over a LAN. I need to connect to the remote MySQL server over the internet from my home PC, but only the Apache server is visible to my home PC. How can I set up port forwarding from my Apache server to the MySQL server so I will be able to "see" the MySQL server from my home PC? This question is a follow-up to my first question, Connect to remote mysql server from my application. Problem is that Mysql server is on LAN, in which you answered me and helped me a lot by telling me to do "port-forwarding". I looked over the internet, and I can't find a good how-to for port forwarding. I'm an experienced programmer, but have little experience with hardware and networks. I can understand what must be done, though, so I just need a little help to sort things out :) I hope you can help me guys, thank you in advance. p.s. The machine Apache is running on is CentOS; the MySQL server is also CentOS. p.s.2 The webserver runs WebHostManager; I don't know if that makes any difference or whether it can be done easily through this, I just mention it :)

    Read the article

  • How do we greatly optimize our MySQL database (or replace it) when using joins?

    - by jkaz
    Hi there, this is the first time I'm approaching an extremely high-volume situation. This is an ad server based on MySQL. However, the query that is used incorporates a lot of JOINs and is generally just slow. (This is Rails ActiveRecord, btw.)
    sel = Ads.find(:all,
      :select => '*',
      :joins => "JOIN campaigns ON ads.campaign_id = campaigns.id JOIN users ON campaigns.user_id = users.id LEFT JOIN countries ON countries.campaign_id = campaigns.id LEFT JOIN keywords ON keywords.campaign_id = campaigns.id",
      :conditions => [flashstr + "keywords.word = ? AND ads.format = ? AND campaigns.cenabled = 1 AND (countries.country IS NULL OR countries.country = ?) AND ads.enabled = 1 AND campaigns.dailyenabled = 1 AND users.uenabled = 1", kw, format, viewer['country'][0]],
      :order => order,
      :limit => limit)
    My questions: Is there an alternative database like MySQL that has JOIN support, but is much faster? (I know there's Postgres, still evaluating it.) Otherwise, would firing up a MySQL instance, loading a local database into memory and re-loading it every 5 minutes help? Otherwise, is there any way I could switch this entire operation to Redis or Cassandra, and somehow change the JOIN behavior to match the (non-JOIN-able) nature of NoSQL? Thank you!
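    Before swapping databases, it may be worth checking how MySQL actually executes this join and whether the filter columns are indexed. A hedged sketch (index names are invented, and the literal values stand in for the bound parameters):

      EXPLAIN SELECT ads.* FROM ads
        JOIN campaigns ON ads.campaign_id = campaigns.id
        JOIN users ON campaigns.user_id = users.id
        LEFT JOIN keywords ON keywords.campaign_id = campaigns.id
      WHERE keywords.word = 'example' AND ads.format = 1
        AND campaigns.cenabled = 1 AND ads.enabled = 1;

      -- composite indexes matching the join/filter columns often help here:
      CREATE INDEX idx_keywords_campaign_word ON keywords (campaign_id, word);
      CREATE INDEX idx_countries_campaign ON countries (campaign_id, country);
      CREATE INDEX idx_ads_campaign_format ON ads (campaign_id, format, enabled);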

    Read the article

  • Why use bin2hex when inserting binary data from PHP into MySQL?

    - by Atli
    I heard a rumor that when inserting binary data (files and such) into MySQL, you should use the bin2hex() function and send it as a HEX-coded value, rather than just use mysql_real_escape_string on the binary string and use that.
    // What you supposedly should do
    $hex = bin2hex($raw_bin);
    $sql = "INSERT INTO `table`(`file`) VALUES (X'{$hex}')";
    // Rather than
    $bin = mysql_real_escape_string($raw_bin);
    $sql = "INSERT INTO `table`(`file`) VALUES ('{$bin}')";
    It is supposedly for performance reasons; something to do with how MySQL handles large strings vs. how it handles HEX-coded values. However, I am having a hard time confirming this. All my tests indicate the exact opposite: the bin2hex method is ~85% slower and uses ~24% more memory. (I am testing this on PHP 5.3, MySQL 5.1, Win7 x64, using a fairly simple insert loop.) For instance, this graph shows the private memory usage of the mysqld process while the test code was running. Does anybody have any explanations or resources that would clarify this? Thanks.
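    For what it's worth, both forms should store identical bytes, which you can verify on the MySQL side alone; a minimal sketch with a made-up table (X'..' and UNHEX() are equivalent spellings of the hex path):

      CREATE TABLE IF NOT EXISTS file_test (id INT AUTO_INCREMENT PRIMARY KEY, file LONGBLOB);
      INSERT INTO file_test (file) VALUES (X'CAFEBABE');        -- hex-literal path (bin2hex in PHP)
      INSERT INTO file_test (file) VALUES (UNHEX('CAFEBABE'));  -- equivalent function form
      SELECT id, HEX(file), LENGTH(file) FROM file_test;        -- both rows hold the same 4 bytes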

    Read the article

  • Foreign key not working in MySQL: Why can I INSERT a value that's not in the foreign column?

    - by stalepretzel
    I've created a table in MySQL: CREATE TABLE actions ( A_id int NOT NULL AUTO_INCREMENT, type ENUM('rate','report','submit','edit','delete') NOT NULL, Q_id int NOT NULL, U_id int NOT NULL, date DATE NOT NULL, time TIME NOT NULL, rate tinyint(1), PRIMARY KEY (A_id), CONSTRAINT fk_Question FOREIGN KEY (Q_id) REFERENCES questions(P_id), CONSTRAINT fk_User FOREIGN KEY (U_id) REFERENCES users(P_id)); This created the table I wanted just fine (although a "DESCRIBE actions;" command showed me that the foreign keys were keys of type MUL, and I'm not sure what this means). However, when I try to enter a Q_id or a U_id that does not exist in the questions or users tables, MySQL still allows these values. What did I do wrong? How can I prevent a table with a foreign key from accepting invalid data? If I add TYPE=InnoDB to the end, I get an error: ERROR 1005 (HY000): Can't create table './quotes/actions.frm' (errno: 150) Why might that happen? I'm told that it's important to enforce data integrity with functional foreign keys, but also that InnoDB should not be used with MySQL. What do you recommend?
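    Part of the answer is in the last paragraph: the default MyISAM engine parses FOREIGN KEY clauses but silently ignores them, which is why invalid Q_id and U_id values are accepted. A hedged sketch of what the InnoDB version needs (errno 150 usually means the referenced column has a different type, isn't indexed, or its table isn't InnoDB); the CREATE is abbreviated to the key-related columns:

      -- the parent tables must be InnoDB and P_id must be indexed (e.g. the primary key):
      ALTER TABLE questions ENGINE=InnoDB;
      ALTER TABLE users ENGINE=InnoDB;

      CREATE TABLE actions (
        A_id INT NOT NULL AUTO_INCREMENT,
        Q_id INT NOT NULL,
        U_id INT NOT NULL,
        PRIMARY KEY (A_id),
        CONSTRAINT fk_Question FOREIGN KEY (Q_id) REFERENCES questions(P_id),
        CONSTRAINT fk_User FOREIGN KEY (U_id) REFERENCES users(P_id)
      ) ENGINE=InnoDB;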

    Read the article

  • How to display the contents from MySQL together with its images properly?

    - by Panda
    I have been trying very hard to think of a solution to my problem but have drained all my brain cells on this... What happens is: I have a CKEditor that allows users to input both text and images. A good example would be http://ckeditor.com/demo, something like the Red Riding Hood example in the demo. I would like to have text surrounding the image like in that example, and with CKEditor users can now do that. However, when they submit the changes, I am wondering how I should save the data. What I have found: after reading articles and solutions on the web, the experts all advise against putting images into the MySQL database together with the text. If that is the case, what should I do to extract the text from CKEditor and store it in the MySQL database, and separately store the images on my web server? I know how to retrieve text from a MySQL database, but how do you put the retrieved text and images back together so that it appears just as the user laid it out in CKEditor?? Guys, I am sorry if my question is confusing you or making you angry; I admit I am really a newbie at this and I will learn all that I can... Thanks for all your advice and teachings.
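    One common layout, sketched here with invented table and column names: keep the HTML that CKEditor produces (its img tags already point at image files by URL) in a text column, and store the uploaded files on the web server, optionally tracking their paths in a second table.

      CREATE TABLE posts (
        id INT AUTO_INCREMENT PRIMARY KEY,
        body MEDIUMTEXT NOT NULL          -- HTML from CKEditor; img src values point at /uploads/...
      ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

      CREATE TABLE post_images (
        id INT AUTO_INCREMENT PRIMARY KEY,
        post_id INT NOT NULL,
        file_path VARCHAR(255) NOT NULL   -- e.g. uploads/2010/05/image01.jpg on the web server
      ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

    Echoing the stored body back out then reproduces the text-around-image layout, since the markup is exactly what the editor produced.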

    Read the article

  • UTF-8 MySQL and Charset, pls help me understand this once and for all!

    - by FFish
    Can someone explain to me why, when I set everything to UTF-8, I keep getting those damn ???
    MySQL Server version: 5.1.44
    MySQL charset: UTF-8 Unicode (utf8)
    I create a new database with name: utf8test and collation: utf8_general_ci. MySQL connection collation: utf8_general_ci.
    My SQL looks like this:
    SET SQL_MODE="NO_AUTO_VALUE_ON_ZERO";
    CREATE TABLE IF NOT EXISTS `test_table` (
      `test_id` int(11) NOT NULL,
      `test_text` text NOT NULL,
      PRIMARY KEY (`test_id`)
    ) ENGINE=MyISAM DEFAULT CHARSET=utf8;
    INSERT INTO `test_table` (`test_id`, `test_text`) VALUES (1, 'hééélo'), (2, 'wööörld');
    My PHP / HTML:
    <?php
    $db_conn = mysql_connect("localhost", "root", "") or die("Can't connect to db");
    mysql_select_db("utf8test", $db_conn) or die("Can't select db");
    // $result = mysql_query("set names 'utf8'"); // this works... why??
    $query = "SELECT * FROM test_table";
    $result = mysql_query($query);
    $output = "";
    while($row = mysql_fetch_assoc($result)) {
        $output .= "id: " . $row['test_id'] . " - text: " . $row['test_text'] . "<br />";
    }
    ?>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
    <html lang="it" xmlns="http://www.w3.org/1999/xhtml" xml:lang="it">
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    <title>UTF-8 test</title>
    </head>
    <body>
    <?php echo $output; ?>
    </body>
    </html>
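    The commented-out set names line is the relevant clue: the connection has its own character set, separate from the table's. Without it the client typically talks latin1, and the utf8 column data gets mangled on the way in or out. A minimal sketch of what to run right after connecting, and how to check the server-side defaults:

      SET NAMES 'utf8';   -- sets character_set_client, character_set_connection and character_set_results
      SHOW VARIABLES LIKE 'character_set%';
      SHOW VARIABLES LIKE 'collation%';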

    Read the article

  • Entity Framework + MySQL - Why is the performance so terrible?

    - by Cyril Gupta
    When I decided to use an OR/M (Entity Framework for MySQL this time) for my new project I was hoping it would save me time, but I seem to have failed at it (for the second time now). Take this simple SQL query:
    SELECT * FROM POST ORDER BY addedOn DESC LIMIT 0, 50
    It executes and gives me results in less than a second, as it should (the table has about 60,000 rows). Here's the equivalent LINQ to Entities query that I wrote for this:
    var q = (from p in db.post orderby p.addedOn descending select p).Take(50);
    var q1 = q.ToList(); // This is where the query is fetched and times out
    But this query never even executes; it ALWAYS times out (without the orderby it takes 5 seconds to run)! My timeout is set to 12 seconds, so you can imagine it is taking much more than that. Why is this happening? Is there a way I can see the actual SQL query that Entity Framework is sending to the db? Should I give up on EF+MySQL and move to standard SQL before I lose all eternity trying to make it work? I've recalibrated my indexes and tried eager loading (which actually makes it fail even without the orderby clause). Please help, I am about to give up on OR/M for MySQL as a lost cause.
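    Whatever SQL the provider ends up generating, the ORDER BY addedOn DESC LIMIT 50 pattern is only cheap when MySQL can read rows in index order, so it may be worth comparing plans with and without an index on that column (index name invented; the generated statement itself can be captured in MySQL's general query log for comparison):

      CREATE INDEX idx_post_addedon ON post (addedOn);
      EXPLAIN SELECT * FROM post ORDER BY addedOn DESC LIMIT 0, 50;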

    Read the article

  • Does Entity Framework or the MySQL provider swallow timeout exceptions on enumeration of results?!

    - by Freddy Rios
    I'm trying to make sense of a situation I have using Entity Framework on .NET 3.5 SP1 + MySQL 6.1.2.0 as the provider. It involves the following code:
    Response.Write("Products: " + plist.Count() + "<br />");
    var total = 0;
    foreach (var p in plist)
    {
        //... some actions
        total++;
        //... other actions
    }
    Response.Write("Total Products Checked: " + total + "<br />");
    Basically the total products varies on each run, and it isn't matching the full total in plist. It varies widely, from ~1/5th to half. There isn't any control flow code inside the foreach, i.e. no break, continue, try/catch, conditions around total++, anything that could affect the count. As confirmation, there are other totals captured inside the loop related to the actions, and those match the lower and higher total runs. I can't find any reason for the above, other than something in Entity Framework or the MySQL provider that causes it to end the foreach early when retrieving an item. The body of the foreach can have some good variation in time, as the actions involve file & network access; my best guess at this time is that when it takes beyond a certain threshold there is some type of timeout in the underlying framework/provider, and instead of raising an exception it silently reports no more items for enumeration. Can anyone shed some light on the above scenario and/or confirm whether the Entity Framework/MySQL provider has the above behavior?
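    One server-side setting that may be worth ruling out (an assumption, not a confirmed diagnosis): MySQL gives up writing a result set to a client that stalls for longer than net_write_timeout, which would fit a foreach body that spends a long time on file and network work per row. A sketch of checking and raising it:

      SHOW VARIABLES LIKE 'net_write_timeout';
      SHOW VARIABLES LIKE 'net_read_timeout';
      SET GLOBAL net_write_timeout = 600;   -- seconds; illustrative value, requires the SUPER privilege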

    Read the article

  • Minimizing MySQL output with Compress() and by concatenating results?

    - by johnrl
    Hi all. It is crucial that I transfer the least amount of data possible between server and client. Therefore I thought of using the MySQL Compress() function. To get the maximum compression I also want to concatenate all my results into one large string (or several of the maximum length allowed by MySQL), to allow for similar results to be compressed, and then compress that string. 1st problem (concatenating mysql results): SELECT name,age FROM users returns 10 results. I want to concatenate all these results into one string of the form: name,age,name,age,name,age... and so on. Is this possible? 2nd problem (compressing the results from above): When I have constructed the concatenated string as above I want to compress it. If I do: SELECT COMPRESS('myname'); then it just gives me as output the character '-', and sometimes it even returns unprintable characters. How do I get COMPRESS() to return a compressed printable string that I can transfer in e.g. ASCII encoding?
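    Both steps can be done in one statement: GROUP_CONCAT handles the concatenation, and wrapping COMPRESS() in HEX() turns its binary output into a printable string. A sketch using the names from the question:

      -- name,age,name,age,... is capped by group_concat_max_len (default 1024), so raise it first:
      SET SESSION group_concat_max_len = 1000000;
      SELECT GROUP_CONCAT(name, ',', age SEPARATOR ',') FROM users;
      SELECT HEX(COMPRESS(GROUP_CONCAT(name, ',', age SEPARATOR ','))) FROM users;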

    Read the article

  • Is there a way to sync (two way) tables between a MySQL server and a local MS Access?

    - by Kailen
    Help me figure out a solution to a (not so unique) problem. My research group has gps devices attached to migratory animals. Every once in a while, a research tech will be within range of an animal and will get the chance to download all the logged points. Each individual spits out a single dbf and new locations are just appended to the end (so the file is just cumulative). These data need to be shared among a research group. Everyone else (besides me) wants to use access, so they can make small edits and prefer that interface. They do not like using MySQL. The solution I came up with is: a) The person who downloads the file goes to a web page, enters animal ID into a form, chooses .dbf file and uploads to a mysql database on the server (I still have to write php code to read the dbf and write sql insert statements from it). b) Everyone syncs from their local access database to the server. (This is natively possible from access but very clunky). Is there a tool (preferably open source), that can compare a access table to mysql table and sync the two (both ways)? Alternatively, does anyone have a more elegant solution? The ultimate goal is to allow everyone to have access to the most current data on their computers using their preferred database app.

    Read the article

  • Difference between SET autocommit=1 and START TRANSACTION in mysql (Have I missed something?)

    - by tkolar
    Hey there, I am reading up on transactions in mysql and am not sure whether I have grasped something specific correctly, and I want to be sure I understood that correctly, so here goes. I know what a transaction is supposed to do, I'm just not sure whether I understood the statement semantics or not. So, my question is, is anything wrong, (and, if that is the case, what is wrong) with the following: By default, autocommit mode is enabled in mysql. Now, SET autocommit=0; will begin a transaction, SET autocommit=1; will implicitly commit. It is possible to COMMIT; as well as ROLLBACK;, in both of which cases autocommit is still set to 0 afterwards (and a new transaction is implicitly started). START TRANSACTION; will basically SET autocommit=0; until a COMMIT; or ROLLBACK; takes place. In other words, START TRANSACTION; and SET autocommit=0; are equivalent, except for the fact that START TRANSACTION; does the equivalent of implicitly adding a SET autocommit=0; after COMMIT; or ROLLBACK; If that is the case, I don't understand http://dev.mysql.com/doc/refman/5.5/en/set-transaction.html#isolevel_serializable - seeing as having an isolation level implies that there is a transaction, meaning that autocommit should be off anyway? And if there is another difference (other than the one described above) between beginning a transaction and setting autocommit, what is it? Thanks a lot in advance for your help!
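    A small session sketch of the one behavioural difference described above (table name invented):

      -- variant 1
      SET autocommit = 0;
      INSERT INTO t VALUES (1);
      COMMIT;               -- autocommit stays 0, so the next statement implicitly opens a new transaction

      -- variant 2
      START TRANSACTION;
      INSERT INTO t VALUES (1);
      COMMIT;               -- autocommit was never changed, so normal autocommit behaviour resumes here
      SELECT @@autocommit;  -- still 1 in variant 2, 0 in variant 1

    As for SET TRANSACTION ISOLATION LEVEL: it only sets the isolation level for the next (or, with SESSION/GLOBAL, subsequent) transactions; it does not itself start one or touch autocommit, which is why it exists independently of the two statements above.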

    Read the article

  • CakePHP 'Cake is NOT able to connect to the database'

    - by kand
    I've looked at this post about a similar issue: CakePHP: Can't access MySQL database and I've tried everything they mentioned in there including: Changing my database.php so that the 'port' attribute for both $default and $test are the location of my mysqld.sock file Changing the 'port' attribute to the actual integer that represents the port in my my.cnf mysql config Changing the mysql socket locations in php.ini to the location of my mysqld.sock file I'm using ubuntu 11.04, apache 2.2.17, mysql 5.1.54, and CakePHP 1.3.10. My install of mysql and apache don't seem to match any conventions, as in, all the config files are there, they are all just in really weird places--I'm not sure why that is, but I've tried reinstalling both programs multiple times with the same results... At any rate, I can log into mysql from the terminal and use it normally, and apache is working because I can see the CakePHP default homepage. I just can't get it to change the message 'Cake is NOT able to connect to the database'. SOLVED: Figured it out, had to change php.ini so that extension_dir pointed to the correct directory and had to add a line extension=mysql.so.

    Read the article

  • Are conditional subqueries optimized out, if the condition is false?

    - by Tobias Schulte
    I have a table foo and a table bar, where each foo might have a bar (and a bar might belong to multiple foos). Now I need to select all foos with a bar. My sql looks like this SELECT * FROM foo f WHERE [...] AND ($param IS NULL OR (SELECT ((COUNT(*))>0) FROM bar b WHERE f.bar = b.id)) with $param being replaced at runtime. The question is: Will the subquery be executed even if param is null, or will the dbms optimize the subquery out? We are using mysql, mssql and oracle. Is there a difference between these regarding the above?
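    Short-circuit evaluation of OR is generally not guaranteed in SQL, so the safe check is to compare the execution plans for both bindings; rewriting the COUNT(*) as EXISTS also lets the engine stop at the first matching bar row when the subquery does run. A sketch (MySQL EXPLAIN shown; Oracle would use EXPLAIN PLAN FOR, SQL Server SET SHOWPLAN_ALL):

      -- parameter bound to NULL vs. bound to a value:
      EXPLAIN SELECT * FROM foo f
      WHERE NULL IS NULL OR EXISTS (SELECT 1 FROM bar b WHERE b.id = f.bar);

      EXPLAIN SELECT * FROM foo f
      WHERE 'x' IS NULL OR EXISTS (SELECT 1 FROM bar b WHERE b.id = f.bar);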

    Read the article

  • Get Your Enterprise Working With Oracle On Track Communication 1.0

    - by Josh Lannin
    The On Track Development team is very pleased to announce that today On Track is available for our customers to download and evaluate.  To learn more about what On Track does start with our whitepaper and datasheet.   If you are a developer, take a look at our documentation and samples posted to our OTN page. For this first blog post, I’ll be speaking to several notable points about our product. Graceful Escalation via Conversations: On Track addresses the “Collaboration Problem” through a single guiding principle – graceful escalation – within the construct of a Conversation. In On Track, collaboration is based on a context (called a “Conversation”) that gracefully escalates in form, structure, and content, as dictated by the particular needs of a given collaboration.  Within that context, On Track provides a rich set of tools to choose from.  These tools provide for communication, coordination, content management, organization, decision making, and analysis -- all essential aspects of collaboration, but not all of them are essential all of the time.  Every collaborative interaction will evolve differently.  Some will evolve to represent work spreading over the course of years and involving a large, distributed team, while others may involve few people and not evolve at all.  Regardless, all collaborative contexts are built from the same parts, utilize the same concepts, and start the same way.  The principle of graceful escalation is that you only use the tools and structure you need; so you only incur the complexity you need. Purposeful Collaboration: Through application integration, On Track Conversations bring enterprise application users the communication and collaboration capabilities required to complete business process.  By association with specific processes or business objects conversations extend the possible interactions and broaden participation to internal or external non-application users and provide a sophisticated interaction experience, all the while enhancing the data set within the owning application.  Purposeful collaboration not only needs to happen in the context of applications, it must support a full range of real-time and long-running interactions to provide the greatest value. Multi Client, Multi Modal: This On Track 1.0 product release includes the same day availability of  multiple clients, including iPhone and iPad applications which are now available on the Apple Store, a fully capable and accessible Outlook Add-In, along with our browser web client.  With each client we have sought to leverage the strengths of each unique device- our iPhone client supports picture and voice posts, the iPad is optimized for meeting room situations and document viewing, and our Outlook add-in allows you to take emails in context and bring them into On Track.  In addition to supporting a diverse array of clients, On Track provides a unified multi modal experience support starting with basic messages moving through to integrated documents with live annotations, snapshots, application sharing, and voice. Next Generation Web Architecture: We believe On Track will help move the bar higher for what users can expect from all web applications, most notably ones that involve real-time activity.  On Track is built from the ground up with an innovative, real-time architecture that leverages the extensive push capabilities of our server.  
Whether you are receiving a new message, viewing where crowds of people are collaborating, or doing live annotation on a document with a set of people, that information comes to you immediately without refreshes or moving back and forth between pages. We've leveraged this core architecture across the product experience and raised the user experience bar for this type of application. These capabilities are also based on open standards and protocols, and are fully extensible by anyone, enabling sophisticated integrations to be created with a wide variety of both legacy and next-generation applications. Agile Product Development: As a product team we operate using continuous feedback and modified agile development methodologies. We have thousands of active internal Oracle users who have helped pilot our product for critical business functions, and the On Track product development team uses our product as the primary vehicle for all our collaboration. Additionally, we have been working with early access customers who are adopting our technology and providing us valuable feedback, which our process has rapidly turned into improvements to our software. On Track agility extends to our server as well, which is built to scale and is very simple to install and configure. We are pleased to make this product announcement and encourage you to join us on Facebook or follow us on Twitter, as well as checking back here for the latest product information.

    Read the article

  • Using WKA in Large Coherence Clusters (Disabling Multicast)

    - by jpurdy
    Disabling hardware multicast (by configuring well-known addresses aka WKA) will place significant stress on the network. For messages that must be sent to multiple servers, rather than having a server send a single packet to the switch and having the switch broadcast that packet to the rest of the cluster, the server must send a packet to each of the other servers. While hardware varies significantly, consider that a server with a single gigabit connection can send at most ~70,000 packets per second. To continue with some concrete numbers, in a cluster with 500 members, that means that each server can send at most 140 cluster-wide messages per second. And if there are 10 cluster members on each physical machine, that number shrinks to 14 cluster-wide messages per second (or with only mild hyperbole, roughly zero). It is also important to keep in mind that network I/O is not only expensive in terms of the network itself, but also the consumption of CPU required to send (or receive) a message (due to things like copying the packet bytes, processing a interrupt, etc). Fortunately, Coherence is designed to rely primarily on point-to-point messages, but there are some features that are inherently one-to-many: Announcing the arrival or departure of a member Updating partition assignment maps across the cluster Creating or destroying a NamedCache Invalidating a cache entry from a large number of client-side near caches Distributing a filter-based request across the full set of cache servers (e.g. queries, aggregators and entry processors) Invoking clear() on a NamedCache The first few of these are operations that are primarily routed through a single senior member, and also occur infrequently, so they usually are not a primary consideration. There are cases, however, where the load from introducing new members can be substantial (to the point of destabilizing the cluster). Consider the case where cluster in the first paragraph grows from 500 members to 1000 members (holding the number of physical machines constant). During this period, there will be 500 new member introductions, each of which may consist of several cluster-wide operations (for the cluster membership itself as well as the partitioned cache services, replicated cache services, invocation services, management services, etc). Note that all of these introductions will route through that one senior member, which is sharing its network bandwidth with several other members (which will be communicating to a lesser degree with other members throughout this process). While each service may have a distinct senior member, there's a good chance during initial startup that a single member will be the senior for all services (if those services start on the senior before the second member joins the cluster). It's obvious that this could cause CPU and/or network starvation. In the current release of Coherence (3.7.1.3 as of this writing), the pure unicast code path also has less sophisticated flow-control for cluster-wide messages (compared to the multicast-enabled code path), which may also result in significant heap consumption on the senior member's JVM (from the message backlog). This is almost never a problem in practice, but with sufficient CPU or network starvation, it could become critical. For the non-operational concerns (near caches, queries, etc), the application itself will determine how much load is placed on the cluster. 
Applications intended for deployment in a pure unicast environment should be careful to avoid excessive dependence on these features. Even in an environment with multicast support, these operations may scale poorly since even with a constant request rate, the underlying workload will increase at roughly the same rate as the underlying resources are added. Unless there is an infrastructural requirement to the contrary, multicast should be enabled. If it can't be enabled, care should be taken to ensure the added overhead doesn't lead to performance or stability issues. This is particularly crucial in large clusters.

    Read the article

  • Transferring data from Salesforce using Apex Data Loader to Oracle

    - by Barret
    While attempting to transfer data from Salesforce to Oracle using Apex Data Loader, I keep getting the following error:
    26937 [databaseAccountExtract] FATAL com.salesforce.dataloader.dao.database.DatabaseContext - Error getting value for SQL parameter: nkey__c. Please make sure that the value exists in the configuration file or is passed in. Database configuration: insertAccount.
    The database-conf.xml has the following beans:
    <bean id="insertAccount" class="com.salesforce.dataloader.dao.database.DatabaseConfig" singleton="true">
      <property name="sqlConfig" ref="insertAccountSql"/>
      <property name="dataSource" ref="dbDataSource"/>
    </bean>
    <bean id="insertAccountSql" class="com.salesforce.dataloader.dao.database.SqlConfig" singleton="true">
      <property name="sqlString">
        <value>
          INSERT INTO VANTROPO.SF_ACCOUNTCHANNEL (nkey__c)
          VALUES (@nkey__c@)
        </value>
      </property>
      <property name="sqlParams">
        <map>
          <entry key="nkey__c" value="java.lang.String"/>
        </map>
      </property>
    </bean>
    The SDL (mapping file) has the following values:
    # Account Insert Mapping values for query from Salesforce (left) and insert/update to Oracle (right)
    # SalesforceFieldName=OracleFieldName
    nkey__c=NKEY__C
    Any help appreciated.

    Read the article

  • Perl DBI doesn't work with Oracle 11g

    - by John
    I am getting the following error connecting to an Oracle 11g database using a simple perl script:
    failed: ERROR OCIEnvNlsCreate. Check ORACLE_HOME (Linux) env var or PATH (Windows) and or NLS settings, permissions, etc. at
    The script is as follows:
    #!/usr/local/bin/perl
    use strict;
    use DBI;
    if ($#ARGV < 3) {
        print "Usage: perl testDbAccess.pl dataBaseUser dataBasePassword SID dataBasePort\n";
        exit 0;
    }
    my ($user, $pwd, $sid, $port) = @ARGV;
    my $host = `hostname`;
    my $dbh;
    my $sth;
    my $dbname = "dbi:Oracle:HOST=$host;SID=$sid;PORT=$port";
    openDbConnection();
    closeDbConnection();
    sub openDbConnection() {
        $dbh = DBI->connect($dbname, $user, $pwd, { RaiseError => 1 }) || die "Database connection not made: $DBI::errstr";
    }
    sub closeDbConnection() {
        #$sth->finish();
        $dbh->disconnect();
    }
    Anyone seen this problem before? /john

    Read the article

  • Parsing SOAP XML in Oracle

    - by user258587
    Hi I am new to Oracle and I am working on something that needs to parse a SOAP request and save the address to DB Tables. I am using the XML parser in Oracle (XMLType) with XPath but am struggling since I can't figure out the way to parse the SOAP request because it has multiple namespaces. Could anyone give me an example? Thanks in advance!!! edit It would be a typical SOAP request similar to the one below. <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:soap="http://soap.service.****.com"> <soapenv:Header /> <soapenv:Body> <soap:UpdateElem> <soap:request> <soap:att1>123456789</soap:att1> <soap:att2 xsi:nil="true" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" /> <soap:att3>L</soap:att3> ..... </soap:request> </soap:UpdateElem> </soapenv:Body> </soapenv:Envelope> I need to retrieve parameters att1, att2... and save them in to a DB table.
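    A hedged sketch of one way to do it in a single query with XMLTable, declaring both namespaces up front (the bind variable name and the column sizes are invented; the partially elided namespace URI is kept exactly as it appears above):

      SELECT x.att1, x.att2, x.att3
        FROM XMLTABLE(
               XMLNAMESPACES('http://schemas.xmlsoap.org/soap/envelope/' AS "soapenv",
                             'http://soap.service.****.com' AS "soap"),
               '/soapenv:Envelope/soapenv:Body/soap:UpdateElem/soap:request'
               PASSING XMLTYPE(:soap_request)
               COLUMNS att1 VARCHAR2(50) PATH 'soap:att1',
                       att2 VARCHAR2(50) PATH 'soap:att2',
                       att3 VARCHAR2(50) PATH 'soap:att3'
             ) x;

    The same SELECT can then feed an INSERT ... SELECT into the address tables.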

    Read the article

  • Mono ASP.NET Oracle Connection

    - by bladepit
    Hello to everybody, if i want to connect to orcale i became the following error: libclntsh.so Description: HTTP 500. Error processing request. Stack Trace: System.DllNotFoundException: libclntsh.so at (wrapper managed-to-native) System.Data.OracleClient.Oci.OciCalls/OciNativeCalls.OCIEnvCreate (intptr&,System.Data.OracleClient.Oci.OciEnvironmentMode,intptr,intptr,intptr,intptr,int,intptr) <0x0005d at System.Data.OracleClient.Oci.OciCalls.OCIEnvCreate (intptr&,System.Data.OracleClient.Oci.OciEnvironmentMode,intptr,intptr,intptr,intptr,int,intptr) [0x00000] in /src/monoscript/mono-2.4.2.3/mcs/class/System.Data.OracleClient/System.Data.OracleClient.Oci/OciCalls.cs:738 at System.Data.OracleClient.Oci.OciEnvironmentHandle..ctor (System.Data.OracleClient.Oci.OciEnvironmentMode) [0x00013] in /src/monoscript/mono-2.4.2.3/mcs/class/System.Data.OracleClient/System.Data.OracleClient.Oci/OciEnvironmentHandle.cs:35 at System.Data.OracleClient.Oci.OciGlue.CreateConnection (System.Data.OracleClient.OracleConnectionInfo) [0x00000] in /src/monoscript/mono-2.4.2.3/mcs/class/System.Data.OracleClient/System.Data.OracleClient/OciGlue.cs:86 at System.Data.OracleClient.OracleConnectionPoolManager.CreateConnection (System.Data.OracleClient.OracleConnectionInfo) [0x00006] in /src/monoscript/mono-2.4.2.3/mcs/class/System.Data.OracleClient/System.Data.OracleClient/OracleConnectionPoolManager.cs:57 at System.Data.OracleClient.OracleConnectionPool.CreateConnection () [0x0000e] in /src/monoscript/mono-2.4.2.3/mcs/class/System.Data.OracleClient/System.Data.OracleClient/OracleConnectionPool.cs:97 at System.Data.OracleClient.OracleConnectionPool.GetConnection () [0x000ba] in /src/monoscript/mono-2.4.2.3/mcs/class/System.Data.OracleClient/System.Data.OracleClient/OracleConnectionPool.cs:74 at System.Data.OracleClient.OracleConnection.Open () [0x00061] in /src/monoscript/mono-2.4.2.3/mcs/class/System.Data.OracleClient/System.Data.OracleClient/OracleConnection.cs:410 at WebServer.Controllers.HomeController.Index () [0x00006] in /home/bhcweb/Projects/Controllers/HomeController.cs:19 at (wrapper dynamic-method) System.Runtime.CompilerServices.ExecutionScope.lambda_method (System.Runtime.CompilerServices.ExecutionScope,System.Web.Mvc.ControllerBase,object[]) <0x00080 at System.Web.Mvc.ActionMethodDispatcher.Execute (System.Web.Mvc.ControllerBase,object[]) <0x0001b at System.Web.Mvc.ReflectedActionDescriptor.Execute (System.Web.Mvc.ControllerContext,System.Collections.Generic.IDictionary2<string, object>) <0x000fd> at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod (System.Web.Mvc.ControllerContext,System.Web.Mvc.ActionDescriptor,System.Collections.Generic.IDictionary2) <0x0001c at System.Web.Mvc.ControllerActionInvoker/c_AnonStoreyB.<m_E () <0x00067 at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter (System.Web.Mvc.IActionFilter,System.Web.Mvc.ActionExecutingContext,System.Func`1) <0x000c4 What is my Problem there? I have read that i have to set my ORACLE_HOME AND LD_LIBRARY_PATH. If i do echo $ORACLE_HOME and $LD_LIBRARY_PATH the path which i have set is coming out: /usr/lib/oracle/xe/app/oracle/product/10.2.0/client/lib This is the path where the libclntsh.so is in. Is this right? Best regards bladepit

    Read the article

  • Ibator didn't generate Oracle varchar2 field

    - by bugbug
    I have table APP_REQ_APPROVE_COMPARE with following fields: "ID" NUMBER NOT NULL ENABLE, "TRACK_NO" VARCHAR2(20 BYTE) NOT NULL ENABLE, "REQ_DATE" DATE NOT NULL ENABLE, "OFFCODE" CHAR(6 BYTE) NOT NULL ENABLE, "COMPARE_CASE_ID" NUMBER NOT NULL ENABLE, "VEHICLE_NAME" VARCHAR2(100 BYTE), "ENGINE_NO" VARCHAR2(100 BYTE), "BODY_NO" VARCHAR2(100 BYTE), "HOLD_SHIP" NUMBER, "OWNERSHIP" VARCHAR2(200 BYTE), "RENT_NAME" VARCHAR2(200 BYTE), "CONTRACT" VARCHAR2(100 BYTE), "CONTRACT_NO" VARCHAR2(100 BYTE), "CONTRACT_DATE" DATE, "ISLAWBREAKERRENT" CHAR(1 BYTE) NOT NULL ENABLE, "MISTAKE_DETAIL" VARCHAR2(4000 BYTE), "COMPARE_REASON" VARCHAR2(4000 BYTE), "CREATE_BY" NUMBER NOT NULL ENABLE, "CREATE_ON" DATE DEFAULT SYSDATE NOT NULL ENABLE, "UPDATE_BY" NUMBER, "UPDATE_ON" DATE, When I generate a java bean using Ibator , I didn't find trackNo, VehicalName, ... (all fields defined as varchar2). What is the problem in my case? Here is my Ibator configuration file: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE ibatorConfiguration PUBLIC "-//Apache Software Foundation//DTD Apache iBATIS Ibator Configuration 1.0//EN" "http://ibatis.apache.org/dtd/ibator-config_1_0.dtd"> <ibatorConfiguration> <classPathEntry location="/dos/connector/oracle_jdbc.jar"/> <ibatorContext id="autoPerson" defaultModelType="flat" targetRuntime="Ibatis2Java2"> <jdbcConnection connectionURL="jdbc:oracle:thin:@192.168.42.144:1521:orcl" driverClass="oracle.jdbc.driver.OracleDriver" userId="user" password="password"/> <javaModelGenerator targetPackage="com.ko.model" targetProject="FormConfig"> <property name="enableSubPackages" value="true"/> <property name="trimStrings" value="true"/> </javaModelGenerator> <sqlMapGenerator targetPackage="com.ko.map" targetProject="FormConfig"> <property name="enableSubPackages" value="true"/> </sqlMapGenerator> <daoGenerator targetPackage="com.ko.model.dao" type="SPRING" targetProject="FormConfig" implementationPackage="com.ko.model.dao.impl" > <property name="enableSubPackges" value="true"/> <property name="methodNameCalculator" value="extended"/> </daoGenerator> <table tableName="APP_REQ_APPROVE_COMPARE" domainObjectName="AppReqApproveCompare"/> <ibatorConfiguration>
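    Before digging into Ibator itself, it may help to confirm what the JDBC metadata will report for that table, for example whether a table with the same name exists in more than one schema, which can lead to half-generated models. A sketch against the Oracle dictionary:

      SELECT owner, column_name, data_type, char_length
        FROM all_tab_columns
       WHERE table_name = 'APP_REQ_APPROVE_COMPARE'
       ORDER BY owner, column_id;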

    Read the article

  • Add new row in a databound form with an Oracle Sequence as the primary key

    - by Ranhiru
    I am connecting C# with Oracle 11g. I have a DataTable which i fill using an Oracle Data Adapter. OracleDataAdapter da; DataTable dt = new DataTable(); da = new OracleDataAdapter("SELECT * FROM Author", con); da.Fill(dt); I have few text boxes that I have databound to various rows in the data table. txtAuthorID.DataBindings.Add("Text", dt, "AUTHORID"); txtFirstName.DataBindings.Add("Text", dt, "FIRSTNAME"); txtLastName.DataBindings.Add("Text", dt, "LASTNAME"); txtAddress.DataBindings.Add("Text", dt, "ADDRESS"); txtTelephone.DataBindings.Add("Text", dt, "TELEPHONE"); txtEmailAddress.DataBindings.Add("Text", dt, "EMAIL"); I also have a DataGridView below the Text Boxes, showing the contents of the DataTable. dgvAuthor.DataSource = dt; Now when I want to add a new row, i do bm.AddNew(); where bm is defined in Form_Load as BindingManagerBase bm; bm = this.BindingContext[dt]; And when the save button is clicked after all the information is entered and validated, i do this.BindingContext[dt].EndCurrentEdit(); try { da.Update(dt); } catch (Exception ex) { MessageBox.Show(ex.Message); } However the problem comes where when I usually enter a row to the database (using SQL Plus) , I use a my_pk_sequence.nextval for the primary key. But how do i specify that when i add a new row in this method? I catch this exception ORA-01400: cannot insert NULL into ("SYSMAN".AUTHOR.AUTHORID") which is obvious because nothing was specified for the primary key. How do get around this? Thanx a lot in advance :)
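    On the database side, the usual pattern is to let the sequence supply AUTHORID during the INSERT itself, either written explicitly or filled in by a trigger, so the databound form never has to provide the key (the sequence and trigger names below are invented):

      INSERT INTO Author (AUTHORID, FIRSTNAME, LASTNAME)
      VALUES (author_seq.NEXTVAL, 'Jane', 'Doe');

      -- or fill it automatically whenever a row arrives without a key:
      CREATE OR REPLACE TRIGGER author_bi
      BEFORE INSERT ON Author FOR EACH ROW
      WHEN (new.AUTHORID IS NULL)
      BEGIN
        SELECT author_seq.NEXTVAL INTO :new.AUTHORID FROM dual;
      END;
      /

    With the trigger in place, the OracleDataAdapter insert can simply omit AUTHORID (or pass NULL) and the key is assigned server-side.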

    Read the article

  • Oracle doesn't remove cursors after closing result set

    - by Vladimir
    Note: we reuse single connection.

    public Connection connection() {
        try {
            if ((connection == null) || (connection.isClosed())) {
                if (connection != null) log.severe("Connection was closed !");
                connection = DriverManager.getConnection(jdbcURL, username, password);
            }
        } catch (SQLException e) {
            log.severe("can't connect: " + e.getMessage());
        }
        return connection;
    }

    public IngisObject[] select(String query, String idColumnName, String[] columns) {
        Connection con = connection();
        Vector<IngisObject> objects = new Vector<IngisObject>();
        try {
            Statement stmt = con.createStatement();
            String sql = query;
            ResultSet rs = stmt.executeQuery(sql); //oracle increases cursors count here
            while (rs.next()) {
                IngisObject o = new IngisObject("New Result");
                o.setIdColumnName(idColumnName);
                o.setDatabase(this);
                for (String column : columns) o.attrs().put(column, rs.getObject(column));
                objects.add(o);
            }
            rs.close(); // oracle don't decrease cursor count here, while it's expected
            stmt.close();
        } catch (SQLException ex) {
            System.out.println(query);
            ex.printStackTrace();
        }
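    To confirm what the server actually considers open (keeping in mind that v$open_cursor also lists cursors that are merely cached in the session cursor cache, not just leaked ones), these dictionary queries are the usual starting point; a sketch assuming access to the v$ views:

      SELECT sql_text, COUNT(*)
        FROM v$open_cursor
       WHERE sid = :session_id
       GROUP BY sql_text
       ORDER BY COUNT(*) DESC;

      SELECT s.value AS open_cursors
        FROM v$sesstat s
        JOIN v$statname n ON n.statistic# = s.statistic#
       WHERE n.name = 'opened cursors current'
         AND s.sid = :session_id;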

    Read the article
