Search Results

Search found 42428 results on 1698 pages for 'database query'.


  • Application Composer Series: Where and When to use Groovy

    - by Richard Bingham
    This brief post is really intended as more of a reference than an article. The table below highlights two things: firstly, where you can add your own custom logic via groovy code (end column), and secondly (middle column), when you might use each particular feature. Obviously this applies only where Application Composer exists, namely Fusion CRM and Oracle Sales Cloud, and is based on current (release 8) functionality.

    Feature / Most Common Use Case / Groovy?

    - Field Triggers (Groovy: Y)
      React to run-time data changes. Only fired when the field is changed and upon submit.
    - Object Triggers (Groovy: Y)
      To extend the standard processing logic for an object, based on record creation, updates and deletes. There is a split between these firing events, with some related to UI/ADF actions and others originating in the database.
      UI Trigger Points:
      - After Create - fires when a new object record is created. Commonly used to set default values for fields.
      - Before Modify - fires when the end-user tries to modify a field value. Could be used for generic warnings or extra security logic.
      - Before Invalidate - fires on the parent object when one of its child object records is created, updated, or deleted. For building in relationship logic.
      - Before Remove - fires when an attempt is made to delete an object record. Can be used to create conditions that prevent deletes.
      Database Trigger Points:
      - Before Insert in Database - fires before a new object is inserted into the database. Can be used to ensure a dependent record exists or check for duplicates.
      - After Insert in Database - fires after a new object is inserted into the database. Could be used to create a complementary record.
      - Before Update in Database - fires before an existing object is modified in the database. Could be used to check dependent record values.
      - After Update in Database - fires after an existing object is modified in the database. Could be used to update a complementary record.
      - Before Delete in Database - fires before an existing object is deleted from the database. Could be used to check dependent record values.
      - After Delete in Database - fires after an existing object is deleted from the database. Could be used to remove dependent records.
      - After Commit in Database - fires after the change pending for the current object (insert, update, delete) is made permanent in the current transaction. Could be used when committed data that has passed all validation is required.
      - After Changes Posted to Database - fires after all changes have been posted to the database, but before they are permanently committed. Could be used to make additional changes that will be saved as part of the current transaction.
    - Field Validation (Groovy: Y)
      Displays a user-entered error message based on groovy logic validating the field value. The message is shown only when the validation logic returns false, and the logic is triggered only when tabbing out of the field on the user interface.
    - Object Validation (Groovy: Y)
      Commonly used where validation is needed across multiple related fields on the object. Triggered on the submit UI action.
    - Object Workflows (Groovy: Y)
      All Object Workflows are fired upon either record creation or update, along with the option of adding a custom groovy firing condition. The available workflow actions are:
      - Field Updates (Y) - change another field when a specified one changes. Intended as an easy way to set different run-time values (e.g. pick values for LOVs); the value field also permits groovy logic entry.
      - E-Mail Notification (N) - sends an email notification to specified users/roles. Templates support using run-time value tokens and rich text.
      - Task Creation (N) - for adding standard tasks for use in the worklist functionality.
      - Outbound Message (N) - will create and send an XML payload of the related object SDO to a specified endpoint.
      - Business Process Flow (N) - intended for approval using the seeded process, however it can also trigger custom BPMN flows.
    - Global Functions (Groovy: Y)
      Utility functions that can be called from any groovy code in Application Composer (across applications).
    - Object Functions (Groovy: Y)
      Utility functions that are local to the parent object. Usually triggered from within 'Buttons and Actions' definitions in Application Composer, although they can be called from other code for that object (e.g. from a trigger).
    - Add Custom Fields (Groovy: Y)
      When adding custom fields there are a few places you can include groovy logic:
      - Default Value (Y) - to add logic within setting the default value when new records are entered.
      - Conditionally Updateable (Y) - to add logic to set the field to read-only or not.
      - Conditionally Required (Y) - to add logic to set the field to required or not.
      - Formula Field (Y) - used to provide a new aggregate field that is entirely based on groovy logic and other field values.
    - Simplified UI Layouts - Advanced Expressions (Groovy: Y)
      Used for creating dynamic layouts for simplified UI pages where fields and regions show/hide based on run-time context values and logic. Also includes support for the depends-on feature as a trigger.

    Related References:
    - This Blog: Application Composer Series
    - Extending Sales Guide: Using Groovy Scripts
    - Groovy Scripting Reference Guide
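    To give a flavour of the scripts themselves, here is a minimal sketch of a field-level validation rule (this sketch is illustrative, not from the post). In field-level validation the pending value is exposed to the script as newValue (with oldValue also available), and returning false triggers the user-entered error message:

        // Field Validation sketch: reject negative values in a numeric field.
        // newValue holds the value being validated; the rule returns a boolean.
        return newValue == null || newValue >= 0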

  • Data Source Security Part 1

    - by Steve Felts
    I’ve written a couple of articles on how to store data source security credentials using the Oracle wallet. I plan to write a few articles on the various types of security available to WebLogic Server (WLS) data sources. There are more options than you might think! There have been several enhancements in this area in WLS 10.3.6, and a couple more enhancements are planned for release WLS 12.1.2 that I will include here for completeness. This isn’t intended as a teaser: if you call your Oracle support person, you can get them now as minor patches to WLS 10.3.6. The current security documentation is scattered in a few places, has a few incorrect statements, and is missing a few topics. It also seems that the knowledge of how to apply some of these features isn’t written down. The goal of these articles is to talk about WLS data source security in a unified way and to introduce some approaches to using the available features.

    Introduction to WebLogic Data Source Security Options

    By default, you define a single database user and password for a data source. You can store it in the data source descriptor or make use of the Oracle wallet. This is a very simple and efficient approach to security. All of the connections in the connection pool are owned by this user and there is no special processing when a connection is given out. That is, it's a homogeneous connection pool and any request can get any connection from a security perspective (there are other aspects like affinity). Regardless of the end user of the application, all connections in the pool use the same security credentials to access the DBMS. No additional information is needed when you get a connection because it's all available from the data source descriptor (or wallet):

        java.sql.Connection conn = mydatasource.getConnection();

    Note: you can enter the password as a name-value pair in the Properties field (this is not permitted for production environments) or you can enter it in the Password field of the data source descriptor. The value in the Password field overrides any password value defined in the Properties passed to the JDBC driver when creating physical database connections. It is recommended that you use the Password attribute in place of the password property in the properties string, because the Password value is encrypted in the configuration file (stored as the password-encrypted attribute in the jdbc-driver-params tag in the module file) and is hidden in the administration console. The Properties and Password fields are located on the administration console Data Source creation wizard or Data Source Configuration tab.

    The JDBC API can also be used to programmatically specify a database user name and password, as in the following:

        java.sql.Connection conn = mydatasource.getConnection("user", "password");

    According to the JDBC specification, it's supposed to take a database user and associated password, but different vendors implement this differently. WLS, by default, treats this as an application server user and password. The pair is authenticated to see if it's a valid user, and that user is used for WLS security permission checks. By default, the user is then mapped to a database user and password using the data source credential mapper, so this API sort of follows the specification, but database credentials are one step removed from the application code. More details and the rationale are described later.

    While the default approach is simple, it does mean that only one database user is doing all of the work. You can't figure out who actually did the update, and you can't restrict SQL operations by who is running the operation, at least at the database level. Any type of per-user logic will need to be in the application code instead of having the database do it. There are various WLS data source features that can be configured to provide some per-user information about the operations to the database.

    WebLogic Data Source Security Options

    This table describes the features available for WebLogic data sources to configure database security credentials, with a brief description of each. It also captures information about the compatibility of these features with one another.

    - User authentication (default)
      Description: default getConnection(user, password) behavior - validate the input and use the user/password in the descriptor.
      Can be used with: Set client identifier.
      Can't be used with: Proxy session, Identity pooling, Use database credentials.
    - Use database credentials
      Description: instead of using the credential mapper, use the supplied user and password directly.
      Can be used with: Set client identifier, Proxy session, Identity pooling.
      Can't be used with: User authentication, Multi Data Source.
    - Set client identifier
      Description: set a client identifier property associated with the connection (Oracle and DB2 only).
      Can be used with: everything.
    - Proxy session
      Description: set a light-weight proxy user associated with the connection (Oracle only).
      Can be used with: Set client identifier, Use database credentials.
      Can't be used with: Identity pooling, User authentication.
    - Identity pooling
      Description: heterogeneous pool of connections owned by specified users.
      Can be used with: Set client identifier, Use database credentials.
      Can't be used with: Proxy session, User authentication, Labeling, Multi Data Source, Active GridLink.

    Note that all of these features are available with both XA and non-XA drivers. Currently, the Proxy Session and Use Database Credentials options are on the Oracle tab of the Data Source Configuration tab of the administration console (even though the Use Database Credentials feature is not just for Oracle databases - oops). The rest of the features are on the Identity tab of the Data Source Configuration tab in the administration console (plan on seeing them all in one place in the future). The subsequent articles will describe these features in more detail. Keep referring back to this table to see the big picture.
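    To make the default flow concrete, here is a minimal sketch of client code (not from the post; the JNDI name jdbc/MyDataSource is hypothetical):

        import java.sql.Connection;
        import javax.naming.InitialContext;
        import javax.sql.DataSource;

        public class DefaultCredentialsExample {
            public static void main(String[] args) throws Exception {
                // Look up the connection pool by its (hypothetical) JNDI name.
                DataSource ds = (DataSource) new InitialContext()
                        .lookup("jdbc/MyDataSource");

                // Homogeneous pool: the connection is owned by the single
                // database user from the descriptor (or wallet), regardless
                // of which application user triggered the request.
                Connection conn = ds.getConnection();
                try {
                    // ... run SQL as the configured database user
                } finally {
                    conn.close();
                }
            }
        }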

  • extensible database design: automatic ALTER TABLE or serialize() field BLOB?

    - by mario
    I want an adaptable database schema, but still want to use a simple table data gateway in my application, where I just pass a $data[] array for storing. The basic columns are settled in the initial table schema. There will however arise a couple of meta fields later (ca. 10-20). I want some flexibility there, and not have to adapt the database manually each time, or - worse - change the application logic just because of new fields. So now there are two options which seem workable yet not overkill. But I'm not sure about the scalability or database drawbacks.

    (1) Automatic ALTER TABLE. Whenever the $data array is to be saved, the keys are compared against the current database columns. New columns get defined before the $data is INSERTed into the table. It actually seems simple enough in test code (closing braces restored here; the original post had them mangled):

        function save($data, $table = "forum") {
            // columns: add any keys not yet known as database columns
            if ($new_fields = array_diff(array_keys($data), known_fields($table))) {
                extend_schema($table, $new_fields, $data);
            }
            // save
            $columns = implode("`, `", array_keys($data));
            $qm = str_repeat(",?", count(array_keys($data)) - 1);
            echo ("INSERT INTO `$table` (`$columns`) VALUES (?$qm);");
        }

        function known_fields($table) {
            return unserialize(@file_get_contents("db:$table")) ?: array("id");
        }

        function extend_schema($table, $new_fields, $data) {
            foreach ($new_fields as $field) {
                echo("ALTER TABLE `$table` ADD COLUMN `$field` VARCHAR;");
            }
        }

    Since it is mostly meta information fields, adding them just as VARCHAR seems sufficient. Nobody will query by them anyway, so the database is really just used as storage here. However, while I might want to add a lot of new $data fields on the go, they will not always be populated.

    (2) serialize() fields into a BLOB. Any new/extraneous meta fields could be opaque to the database. Sorting out the virtual fields from the real database columns is simple, and the meta fields can just be serialize()d into a blob/text field:

        function ext_save($data, $table = "forum") {
            $db_fields = array("id", "content", "flags", "ext");
            // disjoin: move non-column keys into the "ext" bundle
            foreach (array_diff(array_keys($data), $db_fields) as $key) {
                $data["ext"][$key] = $data[$key];
                unset($data[$key]);
            }
            $data["ext"] = serialize($data["ext"]);
        }

    Unserializing and unpacking this 'ext' column on read queries is a minor overhead. The advantage is that there won't be any sparsely filled columns in the database, so I guess it's more compact and faster than the auto-ALTER TABLE approach. Of course, this method prevents ever using one of the new fields in a WHERE or GROUP BY clause. But I think none of the possible meta fields (user_agent, author_ip, author_img, votes, hits, last_modified, ...) would or should ever be used there anyway.

    So I currently prefer the 'ext' blob approach, even if it's a one-way ticket.

    - What are such columns usually called? (looking for examples/docs)
    - Would you use XML serialization for (very theoretical) in-database queries?
    - The adapting table schema seems a "cleaner" interface, even if most columns might remain empty then. How does that impact speed? How many such sparse VARCHAR fields can MySQL/InnoDB stomach?
    - But most importantly: is there any standard implementation for this? A pseudo-ORM with automatic ALTER TABLE tricks? Storing a simple column list seems workable, but something like PDO::getColumnMeta would be more robust.
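    For the 'ext' blob approach, reading a record back is the mirror image of ext_save() - a sketch (assuming a PDO handle $pdo and the column names used above):

        // Unpack the serialized meta fields back into the record array.
        $row = $pdo->query("SELECT * FROM forum WHERE id = 1")
                   ->fetch(PDO::FETCH_ASSOC);
        $ext = !empty($row["ext"]) ? unserialize($row["ext"]) : array();
        unset($row["ext"]);
        $row += $ext;   // virtual fields merged back alongside real columns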

  • Implementing Audit Trail- Spring AOP vs.Hibernate Interceptor vs DB Trigger

    - by RN
    I found a couple of discussion threads on this, but nothing which brought a comparison of all three mechanisms together under one thread. So here is my question: I need to audit DB changes - inserts/updates/deletes to business objects. I can think of three ways to do this:

    1) DB triggers
    2) Hibernate interceptors
    3) Spring AOP

    (This question is specific to a Spring/Hibernate/RDBMS stack. I guess it is neutral to Java/C# or Hibernate/NHibernate, but if your answer depends on C++ or Java or a specific implementation of Hibernate, please say so.)

    What are the pros and cons of selecting one of these strategies? I am not asking for implementation details; this is a design discussion. I am hoping we can make this a part of the community wiki.

  • Mixing AJAX requests with Flash scope objects not working

    - by AlanObject
    I have a JSF page that displays a table from an object called TableQuery that supports stateful pagination, sorting, etc. The bean that accesses the object is a RequestScoped object, and it attempts to preserve the TableQuery by storing it in the flash map. The accessor method looks like this:

        public TableQuery<SysLog> getQuery() {
            if (query != null)
                return query;
            Flash flash = FacesContext.getCurrentInstance()
                    .getExternalContext().getFlash();
            query = (TableQuery) flash.get("Query");
            if (query != null)
                System.out.println("TableSysLog.getQuery() Got query from flash!");
            if (query == null) {
                query = slc.getNewTableQuery();
                System.out.println("TableSysLog.getQuery() Created new query");
            }
            flash.put("Query", query);
            return query;
        }

    The links to go between pages are implemented with p:commandLink. I use the PrimeFaces command link in AJAX mode, so just the link gets processed when it is clicked. The action listener looks like this:

        public void doNextPage(ActionEvent evt) {
            getQuery().doNextPage();
        }

    When it doesn't work I get the error message:

        WARNING: JSF1095: The response was already committed by the time we tried
        to set the outgoing cookie for the flash. Any values stored to the flash
        will not be available on the next request.

    I found a thread about this problem when looking it up. When I turned off HTTP chunking as that article suggests, the error message went away, but the problem remained. Does anyone know what is going on and how this might be fixed?

  • How to add SQLite (SQLite.NET) to my C# project

    - by Lirik
    I followed the instructions in the documentation:

    Scenario 1: Version Independent (does not use the Global Assembly Cache). This method allows you to drop any new version of the System.Data.SQLite.DLL into your application's folder and use it without any code modifications or recompiling. Add the following code to your app.config file:

        <configuration>
          <system.data>
            <DbProviderFactories>
              <remove invariant="System.Data.SQLite"/>
              <add name="SQLite Data Provider" invariant="System.Data.SQLite"
                   description=".Net Framework Data Provider for SQLite"
                   type="System.Data.SQLite.SQLiteFactory, System.Data.SQLite" />
            </DbProviderFactories>
          </system.data>
        </configuration>

    My app.config file now looks like this:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <configSections>
            <sectionGroup name="userSettings" type="System.Configuration.UserSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
              <section name="DataFeed.DataFeedSettings" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" allowExeDefinition="MachineToLocalUser" requirePermission="false" />
            </sectionGroup>
          </configSections>
          <userSettings>
            <DataFeed.DataFeedSettings>
              <setting name="eodData" serializeAs="String">
                <value>False</value>
              </setting>
            </DataFeed.DataFeedSettings>
          </userSettings>
          <system.data>
            <DbProviderFactories>
              <remove invariant="System.Data.SQLite"/>
              <add name="SQLite Data Provider" invariant="System.Data.SQLite"
                   description=".Net Framework Data Provider for SQLite"
                   type="System.Data.SQLite.SQLiteFactory, System.Data.SQLite" />
            </DbProviderFactories>
          </system.data>
        </configuration>

    My project is called "DataFeed":

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Data.SQLite; //<-- Causes compiler error

        namespace DataFeed
        {
            class Program
            {
                static void Main(string[] args)
                {
                }
            }
        }

    The error I get is:

        .\dev\DataFeed\Program.cs(5,19): error CS0234: The type or namespace name
        'SQLite' does not exist in the namespace 'System.Data' (are you missing
        an assembly reference?)

    I'm not using the GAC; I simply dropped System.Data.SQLite.dll into my .\dev\DataFeed\ folder. I thought that all I needed to do was add the DLL to the project folder, as mentioned in the documentation. Any hints on how to actually make this work?
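    For what it's worth, the version-independent scenario the documentation describes is normally consumed through the ADO.NET provider-factory API rather than a direct using of the assembly's namespace - a minimal sketch (the connection string is hypothetical):

        using System.Data.Common;

        class FactoryExample
        {
            static void Main()
            {
                // Resolves the provider via the app.config registration, so no
                // compile-time reference to System.Data.SQLite is required.
                DbProviderFactory factory =
                    DbProviderFactories.GetFactory("System.Data.SQLite");
                using (var conn = factory.CreateConnection())
                {
                    conn.ConnectionString = "Data Source=datafeed.db";
                    conn.Open();
                }
            }
        }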

  • VB 2008 or VB 2010 Dataset help

    - by Diabolo
    I have three forms similar to the one in the link. I want to add a Total textbox to every form. The total should add up the values entered in the Montant textbox: for every entry the user makes, it should take that textbox's value and sum them all. How can I do so? Thanks
    http://i1006.photobucket.com/albums/af189/diaboloent/OneForm.jpg
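    A minimal sketch of one way to do this, assuming the entries are held in a DataTable with a numeric Montant column (the table and control names below are hypothetical):

        ' Sum the Montant column of the bound DataTable and show it in the
        ' Total textbox. DataTable.Compute evaluates an aggregate expression.
        Dim table As DataTable = myDataSet.Tables("Entries")
        Dim total As Object = table.Compute("SUM(Montant)", "")
        txtTotal.Text = If(IsDBNull(total), "0", total.ToString())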

  • SQLite UPSERT - ON DUPLICATE KEY UPDATE

    - by Alix Axel
    MySQL has something like this:

        INSERT INTO visits (ip, hits)
        VALUES ('127.0.0.1', 1)
        ON DUPLICATE KEY UPDATE hits = hits + 1;

    As far as I know, this feature doesn't exist in SQLite. What I want to know is if there is any way to achieve the same effect without having to execute two queries. Also, if this is not possible, what do you prefer: SELECT + (INSERT or UPDATE), or UPDATE (+ INSERT if the UPDATE fails)?
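    For comparison, one commonly cited single-statement workaround in SQLite is INSERT OR REPLACE - a sketch, assuming ip is the table's unique key (note it replaces the whole row, so any other columns you want to keep must be restated the same way):

        INSERT OR REPLACE INTO visits (ip, hits)
        VALUES ('127.0.0.1',
                COALESCE((SELECT hits FROM visits WHERE ip = '127.0.0.1'), 0) + 1);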

  • expecting tASSOC in a Rails file

    - by steven_noble
    I'm sure I've done something stupid here, but I just can't see it. I call the breadcrumb method in the application view. app/helpers/breadcrumbs_helper.rb says:

        module BreadcrumbsHelper
          def breadcrumb
            @crumb_list = []
            drominay_crumb_builder
            project_crumb_builder
            content_tag(:div, :id => "breadcrumbs",
              @crumb_list.map { |list_item| crumb_builder(list_item) })
          end

          def crumb_builder(list_item)
            if list_item == @crumb_list.last
              content_tag(:span, list_item['body'], :class => list_item['crumb'])
            else
              body = ["list_item['body']", "&nbsp;&#x2192;&nbsp;"].join
              link_to(body, list_item['url'], :class => list_item['crumb'])
            end
          end

          def drominay_crumb_builder
            list_item = Hash.new
            list_item['body'] = "Drominay"
            list_item['url'] = "root"
            @crumb_list << list_item
          end

          def project_crumb_builder
          end
        end

    Why oh why am I getting this "expecting tASSOC" error? (And what is a tASSOC anyway?)

        steven-nobles-imac-200:drominay steven$ script/server
        => Booting Mongrel (use 'script/server webrick' to force WEBrick)
        => Rails 2.2.2 application starting on http://0.0.0.0:3000
        => Call with -d to detach
        => Ctrl-C to shutdown server
        ** Starting Mongrel listening at 0.0.0.0:3000
        ** Starting Rails with development environment...
        Exiting /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require': /Users/steven/Drominay/app/helpers/breadcrumbs_helper.rb:7: syntax error, unexpected ')', expecting tASSOC (SyntaxError)
        /Users/steven/Drominay/app/helpers/breadcrumbs_helper.rb:29: syntax error, unexpected $end, expecting kEND
        from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require'
        from /Library/Ruby/Gems/1.8/gems/activesupport-2.2.2/lib/active_support/dependencies.rb:153:in `require'
        from /Library/Ruby/Gems/1.8/gems/activesupport-2.2.2/lib/active_support/dependencies.rb:521:in `new_constants_in'
        from /Library/Ruby/Gems/1.8/gems/activesupport-2.2.2/lib/active_support/dependencies.rb:153:in `require'
        from /Users/steven/Drominay/app/helpers/application_helper.rb:5
        from /Library/Ruby/Gems/1.8/gems/activesupport-2.2.2/lib/active_support/dependencies.rb:382:in `load_without_new_constant_marking'
        from /Library/Ruby/Gems/1.8/gems/activesupport-2.2.2/lib/active_support/dependencies.rb:382:in `load_file'
        from /Library/Ruby/Gems/1.8/gems/activesupport-2.2.2/lib/active_support/dependencies.rb:521:in `new_constants_in'
        ... 56 levels...
        from /Users/steven/.gem/ruby/1.8/gems/rails-2.2.2/lib/commands/server.rb:49
        from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require'
        from script/server:3

  • Hibernate: Found: float, expected: double precision

    - by Frederic Morin
    I have a problem with the mapping of the Oracle FLOAT (double precision) datatype to the Java Double datatype. The Hibernate schema validator seems to fail when the Java Double datatype is used:

        org.hibernate.HibernateException: Wrong column type in DB.TABLE for
        column amount. Found: float, expected: double precision

    The only way to avoid this is to disable schema validation and hope the schema is in sync with the app about to run. I must fix this before it goes out to production.

    App's environment:
    - Grails 1.2.1
    - Hibernate-core 3.3.1.GA
    - Oracle 10g
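    One commonly suggested workaround (a sketch, not from the question; the class and field names are hypothetical) is to pin the column's SQL type in the GORM mapping so the validator expects Oracle's FLOAT rather than DOUBLE PRECISION:

        class Payment {
            Double amount

            static mapping = {
                // Match the column type the validator actually finds in the schema.
                amount sqlType: 'float'
            }
        }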

  • Can not delete row from MySQL

    - by Drew
    Howdy all, I've got a table which won't delete a row. Specifically, when I try to delete any row with a GEO_SHAPE_ID over 150000000, it simply does not disappear from the DB. I have tried:

    1. SQLyog, to erase it.
    2. DELETE FROM TABLE WHERE GEO_SHAPE_ID = 150000042 (0 rows affected).
    3. UNLOCK TABLES, then 2 again.

    As far as I am aware, bigint is a valid candidate for auto_increment. Anyone know what could be up? You gotta help us, Doc. We’ve tried nothin’ and we’re all out of ideas! DJS.

    PS. Here is the table construct and some sample data, just for giggles:

        CREATE TABLE `GEO_SHAPE` (
          `GEO_SHAPE_ID` bigint(11) NOT NULL auto_increment,
          `RADIUS` float default '0',
          `LATITUDE` float default '0',
          `LONGITUDE` float default '0',
          `SHAPE_TYPE` enum('Custom','Region') default NULL,
          `PARENT_ID` int(11) default NULL,
          `SHAPE_POLYGON` polygon default NULL,
          `SHAPE_TITLE` varchar(45) default NULL,
          `SHAPE_ABBREVIATION` varchar(45) default NULL,
          PRIMARY KEY (`GEO_SHAPE_ID`)
        ) ENGINE=MyISAM AUTO_INCREMENT=150000056 DEFAULT CHARSET=latin1
          CHECKSUM=1 DELAY_KEY_WRITE=1 ROW_FORMAT=DYNAMIC;

        SET FOREIGN_KEY_CHECKS = 0;
        LOCK TABLES `GEO_SHAPE` WRITE;
        INSERT INTO `GEO_SHAPE` (`GEO_SHAPE_ID`, `RADIUS`, `LATITUDE`, `LONGITUDE`, `SHAPE_TYPE`, `PARENT_ID`, `SHAPE_POLYGON`, `SHAPE_TITLE`, `SHAPE_ABBREVIATION`)
          VALUES (57, NULL, NULL, NULL, 'Region', 10, NULL, 'Washington', 'WA');
        INSERT INTO `GEO_SHAPE` (`GEO_SHAPE_ID`, `RADIUS`, `LATITUDE`, `LONGITUDE`, `SHAPE_TYPE`, `PARENT_ID`, `SHAPE_POLYGON`, `SHAPE_TITLE`, `SHAPE_ABBREVIATION`)
          VALUES (58, NULL, NULL, NULL, 'Region', 10, NULL, 'West Virginia', 'WV');
        INSERT INTO `GEO_SHAPE` (`GEO_SHAPE_ID`, `RADIUS`, `LATITUDE`, `LONGITUDE`, `SHAPE_TYPE`, `PARENT_ID`, `SHAPE_POLYGON`, `SHAPE_TITLE`, `SHAPE_ABBREVIATION`)
          VALUES (59, NULL, NULL, NULL, 'Region', 10, NULL, 'Wisconsin', 'WI');
        INSERT INTO `GEO_SHAPE` (`GEO_SHAPE_ID`, `RADIUS`, `LATITUDE`, `LONGITUDE`, `SHAPE_TYPE`, `PARENT_ID`, `SHAPE_POLYGON`, `SHAPE_TITLE`, `SHAPE_ABBREVIATION`)
          VALUES (150000042, 10, -33.8833, 151.217, 'Custom', NULL, NULL, 'Sydney%2C%20New%20South%20Wales%20%2810km%20r', NULL);
        INSERT INTO `GEO_SHAPE` (`GEO_SHAPE_ID`, `RADIUS`, `LATITUDE`, `LONGITUDE`, `SHAPE_TYPE`, `PARENT_ID`, `SHAPE_POLYGON`, `SHAPE_TITLE`, `SHAPE_ABBREVIATION`)
          VALUES (150000043, 10, -33.8833, 151.167, 'Custom', NULL, NULL, 'Annandale%2C%20New%20South%20Wales%20%2810km%', NULL);
        INSERT INTO `GEO_SHAPE` (`GEO_SHAPE_ID`, `RADIUS`, `LATITUDE`, `LONGITUDE`, `SHAPE_TYPE`, `PARENT_ID`, `SHAPE_POLYGON`, `SHAPE_TITLE`, `SHAPE_ABBREVIATION`)
          VALUES (150000048, 10, -27.5, 153.017, 'Custom', NULL, NULL, 'Brisbane%2C%20Queensland%20%2810km%20radius%2', NULL);
        INSERT INTO `GEO_SHAPE` (`GEO_SHAPE_ID`, `RADIUS`, `LATITUDE`, `LONGITUDE`, `SHAPE_TYPE`, `PARENT_ID`, `SHAPE_POLYGON`, `SHAPE_TITLE`, `SHAPE_ABBREVIATION`)
          VALUES (150000045, 10, 43.1002, -75.2956, 'Custom', NULL, NULL, 'New%20York%20Mills%2C%20New%20York%20%2810km%', NULL);
        INSERT INTO `GEO_SHAPE` (`GEO_SHAPE_ID`, `RADIUS`, `LATITUDE`, `LONGITUDE`, `SHAPE_TYPE`, `PARENT_ID`, `SHAPE_POLYGON`, `SHAPE_TITLE`, `SHAPE_ABBREVIATION`)
          VALUES (150000046, 10, 40.1117, -78.9258, 'Custom', NULL, NULL, 'Region1', NULL);
        UNLOCK TABLES;
        SET FOREIGN_KEY_CHECKS = 1;

  • Solr error; Anybody know what this means?

    - by Camran
    I am installing Solr on my VPS (Ubuntu 9.10) via PuTTY. First, I thought about installing Solr with Tomcat, but after installing Tomcat I changed my mind and went for the Jetty which comes with Solr. Now that I have set up everything on my server, when I try to start the start.jar file I get some errors. Here is some text from the log file:

        2010-05-29 00:22:42.074::INFO: jetty-6.1.3
        2010-05-29 00:22:42.134::INFO: Extract jar:file:/var/www/webapps/solr.war!/ to /var/www/work/Jetty_0_0_0_0_8983_solr.war__solr__k1kf17/webapp
        May 29, 2010 12:22:42 AM org.apache.solr.core.SolrResourceLoader locateSolrHome
        INFO: JNDI not configured for solr (NoInitialContextEx)
        May 29, 2010 12:22:42 AM org.apache.solr.core.SolrResourceLoader locateSolrHome
        INFO: solr home defaulted to 'solr/' (could not find system property or JNDI)
        May 29, 2010 12:22:42 AM org.apache.solr.core.SolrResourceLoader <init>
        INFO: Solr home set to 'solr/'
        May 29, 2010 12:22:42 AM org.apache.solr.servlet.SolrDispatchFilter init
        INFO: SolrDispatchFilter.init()
        May 29, 2010 12:22:42 AM org.apache.solr.core.SolrResourceLoader locateSolrHome
        INFO: JNDI not configured for solr (NoInitialContextEx)
        May 29, 2010 12:22:42 AM org.apache.solr.core.SolrResourceLoader locateSolrHome
        INFO: solr home defaulted to 'solr/' (could not find system property or JNDI)
        May 29, 2010 12:22:42 AM org.apache.solr.core.CoreContainer$Initializer initialize
        INFO: looking for solr.xml: /var/www/solr/solr.xml
        May 29, 2010 12:22:42 AM org.apache.solr.core.SolrResourceLoader <init>
        INFO: Solr home set to 'solr/'

    Anybody know what this is? Thanks

  • MySQL Error 1452 - Cannot add or update a child row: a foreign key constraint fails

    - by dscher
    I've looked at other people's questions on this topic but can't seem to find where my error is coming from. Any help would be greatly appreciated. I'm including as much as I can think of that might help find the problem:

        CREATE TABLE stocks (
          id INT AUTO_INCREMENT NOT NULL,
          user_id INT(11) UNSIGNED NOT NULL,
          ticker VARCHAR(20) NOT NULL,
          name VARCHAR(20),
          rating INT(11),
          position ENUM("strong buy", "buy", "sell", "strong sell", "neutral"),
          next_look DATE,
          privacy ENUM("public", "private"),
          PRIMARY KEY(id),
          FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

        CREATE TABLE IF NOT EXISTS `stocks_tags` (
          `stock_id` INT NOT NULL,
          `tag_id` INT NOT NULL,
          PRIMARY KEY (`stock_id`,`tag_id`),
          KEY `fk_stock_tag` (`tag_id`),
          KEY `fk_tag_stock` (`stock_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

        ALTER TABLE `stocks_tags`
          ADD CONSTRAINT `fk_stock_tag` FOREIGN KEY (`tag_id`)
            REFERENCES `tags` (`id`) ON DELETE CASCADE ON UPDATE CASCADE,
          ADD CONSTRAINT `fk_tag_stock` FOREIGN KEY (`stock_id`)
            REFERENCES `stocks` (`id`) ON DELETE CASCADE ON UPDATE CASCADE;

        CREATE TABLE tags (
          id INT AUTO_INCREMENT NOT NULL PRIMARY KEY,
          tags VARCHAR(30),
          UNIQUE(tags)
        ) ENGINE=INNODB DEFAULT CHARSET=utf8;

    And the error I'm getting:

        Database_Exception [ 1452 ]: Cannot add or update a child row: a foreign
        key constraint fails (`ddmachine`.`stocks_tags`, CONSTRAINT `fk_stock_tag`
        FOREIGN KEY (`tag_id`) REFERENCES `tags` (`id`) ON DELETE CASCADE
        ON UPDATE CASCADE)
        [ INSERT INTO `stocks_tags` (`stock_id`, `tag_id`) VALUES (19, 'cash') ]

    I did see that someone else had a similar problem based on their enum columns, but I don't think that's it.
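    For comparison, a child-row insert satisfies fk_stock_tag only when tag_id is an existing integer id from tags, not the tag's text - a hypothetical sketch against the schema above:

        -- Ensure the tag row exists, then reference it by its numeric id.
        INSERT IGNORE INTO tags (tags) VALUES ('cash');
        INSERT INTO stocks_tags (stock_id, tag_id)
            VALUES (19, (SELECT id FROM tags WHERE tags = 'cash'));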

  • How to rethrow the same exception in SQL Server

    - by Shantanu Gupta
    I want to rethrow in SQL Server the same exception that occurred in my TRY block. I am able to throw the same message, but I want to throw the same error number.

        BEGIN TRANSACTION
        BEGIN TRY
            INSERT INTO Tags.tblDomain (DomainName, SubDomainId, DomainCode, Description)
            VALUES (@DomainName, @SubDomainId, @DomainCode, @Description)
            COMMIT TRANSACTION
        END TRY
        BEGIN CATCH
            DECLARE @severity INT;
            DECLARE @state INT;
            SELECT @severity = ERROR_SEVERITY(), @state = ERROR_STATE();
            RAISERROR(@@ERROR, @severity, @state);
            ROLLBACK TRANSACTION
        END CATCH

    The line

        RAISERROR(@@ERROR, @severity, @state);

    will show an error, but not the functionality I want: it raises an error with error number 50000, whereas I want the error number I am passing in (@@ERROR) to be thrown, so that I can capture that error number at the front end, i.e.:

        catch (SqlException ex)
        {
            if (ex.Number == 2627)
                MessageBox.Show("Duplicate value cannot be inserted");
        }

    I want this functionality, which can't be achieved using RAISERROR. I don't want to give a custom error message at the back end.
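    For reference, a sketch of what later versions offer: SQL Server 2012 added the THROW statement, which, when used without arguments inside a CATCH block, rethrows the original error with its original number (e.g. 2627) rather than 50000:

        BEGIN CATCH
            IF @@TRANCOUNT > 0
                ROLLBACK TRANSACTION;
            THROW;  -- rethrows the original error, number and message intact
        END CATCH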

  • Undo Table Partitioning - SQL Server 2008

    - by Binder
    I have a table 'X' and did the following:

        CREATE PARTITION FUNCTION PF1(INT)
        AS RANGE LEFT FOR VALUES (1, 2, 3, 4);

        CREATE PARTITION SCHEME PS1
        AS PARTITION PF1 ALL TO ([PRIMARY]);

        CREATE CLUSTERED INDEX CIDX_X ON X(col1) ON PS1(col1);

    These 3 steps created 4 logical partitions of the data I had. My question is: how do I revert this partitioning to its original state?
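    A sketch of the usual approach (assuming CIDX_X is the only object placed on the scheme): rebuild the clustered index back onto a single filegroup, which moves the data off the partition scheme, and then drop the now-unused scheme and function.

        -- Rebuild the clustered index on [PRIMARY] instead of the scheme.
        CREATE CLUSTERED INDEX CIDX_X ON X(col1)
        WITH (DROP_EXISTING = ON)
        ON [PRIMARY];

        -- Once nothing references them any more:
        DROP PARTITION SCHEME PS1;
        DROP PARTITION FUNCTION PF1;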

  • Javascript data store solution using PhoneGap

    - by nickcartwright
    Hiya, does anyone have any experience of storing data in JavaScript across all mobile platforms using PhoneGap? My ideal solution would be to use something like SQLite, but unfortunately SQLite isn't supported across all the platforms PhoneGap supports. I tried to ask this question a little while ago, but it got quite a few negative marks. If you think this is a bad or pointless question, I would love to know why, as it will hopefully help me to understand the problem! Cheers, Nick.
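    For reference, one baseline worth considering (a sketch, not from the question): the HTML5 localStorage key-value store is available in the embedded browsers on most of the platforms PhoneGap wraps, and can hold small JSON-serialized data sets:

        // Persist and restore a small JSON blob via HTML5 localStorage.
        localStorage.setItem("notes", JSON.stringify({ draft: "hello" }));
        var notes = JSON.parse(localStorage.getItem("notes") || "{}");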

  • Django: making raw SQL query, passing multiple/repeated params?

    - by AP257
    Hopefully this should be a fairly straightforward question; I just don't know enough about Python and Django to answer it. I've got a raw SQL query in Django that takes six different parameters, the first two of which (centreLat and centreLng) are each repeated:

        query = "SELECT units, (SQRT(((lat-%s)*(lat-%s)) + ((lng-%s)*(lng-%s)))) AS distance FROM places WHERE lat<%s AND lat>%s AND lon<%s AND lon>%s ORDER BY distance;"
        params = [centreLat, centreLng, swLat, neLat, swLng, neLng]
        places = Place.objects.raw(query, params)

    How do I structure the params object and the query string so they know which parameters to repeat and where?
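    For what it's worth, with positional %s placeholders the usual pattern is simply to repeat the value in the params list, once per occurrence and in query order - a sketch against the query above (assuming ne*/sw* are the upper/lower bounds respectively):

        # Eight %s placeholders means eight entries, repeats included.
        params = [centreLat, centreLat,   # (lat-%s)*(lat-%s)
                  centreLng, centreLng,   # (lng-%s)*(lng-%s)
                  neLat, swLat,           # lat < %s AND lat > %s
                  neLng, swLng]           # lon < %s AND lon > %s
        places = Place.objects.raw(query, params)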

  • ETL Tools and Build Tools

    - by Ngu Soon Hui
    I am familiar with automated software build tools (such as Automated Build Studio). Now I am looking at ETL tools. The one thing that crosses my mind is that I can do anything an ETL tool does by using a software build tool. ETL tools are tailored for data loading and manipulation, for which a lot of scripts are needed in order to do the job. A software build tool, on the other hand, is versatile enough to do any job, including writing scripts to extract, transform and load any data from any format into any format. Am I right?

  • Optional one-to-one mapping in Hibernate

    - by hibernate
    How do I create an optional one-to-one mapping in the Hibernate hbm file? For example, suppose that I have a User and a LastVisitedPage table. The user may or may not have a last-visited page. Here is my current one-to-one mapping in the hbm file:

    User class:

        <one-to-one name="lastVisitedPage" class="LastVisitedPage" cascade="save-update"/>

    LastVisitedPage class:

        <one-to-one name="user" class="User" constrained="true"/>

    The above example does not allow the creation of a user who does not have a last visited page, yet a freshly created user has not visited any pages. How do I change the hbm mapping to make the lastVisitedPage mapping optional?
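    One mapping commonly suggested for an optional association (a sketch; the column name is hypothetical): replace the constrained one-to-one on the owning side with a nullable many-to-one carrying a unique constraint, which permits users with no page row at all.

        <!-- User class: nullable FK column; unique keeps it one-to-one. -->
        <many-to-one name="lastVisitedPage" class="LastVisitedPage"
                     column="LAST_VISITED_PAGE_ID"
                     unique="true" not-null="false"
                     cascade="save-update"/>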

  • How to select the top n from a union of two queries where the resulting order needs to be ranked by individual query?

    - by Jedidja
    Let's say I have a table with usernames:

        Id  | Name
        ----+-----------
        1   | Bobby
        20  | Bob
        90  | Bob
        100 | Joe-Bob
        630 | Bobberino
        820 | Bob Junior

    I want to return a list of the top n matches on Name for 'Bob', where the resulting set first contains exact matches followed by similar matches. I thought something like this might work:

        SELECT TOP 4 a.*
        FROM (
            SELECT * FROM Usernames WHERE Name = 'Bob'
            UNION
            SELECT * FROM Usernames WHERE Name LIKE '%Bob%'
        ) AS a

    but there are two problems:

    1. It's an inefficient query, since the sub-select could return many rows (looking at the execution plan shows a join happening before the TOP).
    2. (Almost) more importantly, the exact match(es) will not appear first in the results, since the resulting set appears to be ordered by primary key.

    I am looking for a query that will return (for TOP 4):

        Id | Name
        ---+-----
        20 | Bob
        90 | Bob

    and then 2 results from the LIKE query, e.g. 1 Bobby and 100 Joe-Bob. Is this possible in a single query?
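    One common single-query pattern (a sketch): since every exact 'Bob' also satisfies LIKE '%Bob%', the UNION can be dropped entirely and the exact matches ranked first in the ORDER BY.

        SELECT TOP 4 Id, Name
        FROM Usernames
        WHERE Name LIKE '%Bob%'
        ORDER BY CASE WHEN Name = 'Bob' THEN 0 ELSE 1 END, Id;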

  • PHP postgres backup.

    - by russell kinch
    Hi. I am trying to make a Postgres PHP backup script. I have downloaded one for the command line, which looks like this:

        #!/bin/bash
        find /home/russell/pg_bkp -type f -mtime +7 -exec rm {} \;
        time=`date +%Y-%m-%d`; # date in reverse so that the latest date appears last in the list of backup files.
        PGPASSWORD=****** pg_dump -i -h localhost -p 5432 -U postgres -F c -b -v -f "/home/russell/pg_bkp/$time.backup" ah3

    How can I implement this in PHP? The extension that this creates is .backup. It works great and I have used it many times; the data is perfect, but doing it from inside my website would be better. Thanks
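    A minimal sketch of the same steps from PHP (paths, database name and masked password are taken from the script above; error handling kept minimal). Note this just shells out to pg_dump, so it must be on the web server's PATH and the web user needs write access to the backup directory - assumptions worth checking:

        <?php
        // Prune backups older than 7 days, then dump the ah3 database.
        foreach (glob('/home/russell/pg_bkp/*.backup') as $old) {
            if (filemtime($old) < time() - 7 * 86400) {
                unlink($old);
            }
        }

        $file = '/home/russell/pg_bkp/' . date('Y-m-d') . '.backup';
        putenv('PGPASSWORD=******');  // placeholder password, as in the script
        $cmd = sprintf(
            'pg_dump -i -h localhost -p 5432 -U postgres -F c -b -f %s ah3',
            escapeshellarg($file)
        );
        exec($cmd, $output, $status);
        if ($status !== 0) {
            error_log("pg_dump failed with status $status");
        }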

  • What are best practices for collecting, maintaining and ensuring accuracy of a huge data set?

    - by Kyle West
    I am posing this question looking for practical advice on how to design a system. Sites like amazon.com and Pandora have and maintain huge data sets to run their core business. For example, Amazon (and every other major e-commerce site) has millions of products for sale, images of those products, pricing, specifications, and so on. Ignoring the data coming in from third-party sellers and the user-generated content, all that "stuff" had to come from somewhere and is maintained by someone. It's also incredibly detailed and accurate. How do they do it? Is there just an army of data-entry clerks, or have they devised systems to handle the grunt work? My company is in a similar situation. We maintain a huge (tens of millions of records) catalog of automotive parts and the cars they fit. We've been at it for a while now and have come up with a number of programs and processes to keep our catalog growing and accurate; however, it seems like to grow the catalog to x items we need to grow the team to y. I need to figure out some ways to increase the efficiency of the data team, and hopefully I can learn from the work of others. Any suggestions are appreciated; even better would be links to content I could spend some serious time reading. THANKS! Kyle

  • Comparing two Oracle schema, other users

    - by Igman
    Hello! I have been tasked with comparing two Oracle schemas with a large number of tables to find the structural differences between them. Up until now I have used the DB Diff tool in Oracle SQL Developer, and it has worked very well. The issue is that now I need to compare tables in a user that I cannot log into, but I can see it through the Other Users section in SQL Developer. Whenever I try to use the diff tool to compare those objects to the other schema, it does not work. Does anyone have any idea how to do this? It would save me a very large amount of work. I have some basic SQL knowledge if that is what's needed. Thanks.
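    A minimal sketch of a dictionary-based fallback, assuming your account has been granted enough visibility to see the other user's objects (the schema names below are hypothetical): column-level structural differences can be pulled from ALL_TAB_COLUMNS without logging in as either owner.

        -- Columns present (or typed differently) in SCHEMA_A but not SCHEMA_B;
        -- run again with the owners swapped for the other direction.
        SELECT table_name, column_name, data_type, data_length, nullable
        FROM   all_tab_columns
        WHERE  owner = 'SCHEMA_A'
        MINUS
        SELECT table_name, column_name, data_type, data_length, nullable
        FROM   all_tab_columns
        WHERE  owner = 'SCHEMA_B';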
