Search Results

Search found 5520 results on 221 pages for 'compound primary'.


  • PHP Doctrine frustration: loading models doesn't work..?

    - by ropstah
    I'm almost losing it; I really hope someone can help me out! I'm using Doctrine with CodeIgniter. Everything is set up correctly and works until I generate the classes and view the website:

    Fatal error: Class 'BaseObjecten' not found in /var/www/vhosts/domain.com/application/models/Objecten.php on line 13

    I'm using the following bootstrapper (as a CodeIgniter plugin):

    ```php
    <?php
    // system/application/plugins/doctrine_pi.php

    // load Doctrine library
    require_once BASEPATH . '/plugins/Doctrine/lib/Doctrine.php';

    // load database configuration from CodeIgniter
    require_once APPPATH.'/config/database.php';

    // this will allow Doctrine to load Model classes automatically
    spl_autoload_register(array('Doctrine', 'autoload'));

    // we load our database connections into Doctrine_Manager
    // this loop allows us to use multiple connections later on
    foreach ($db as $connection_name => $db_values) {
        // first we must convert to dsn format
        $dsn = $db[$connection_name]['dbdriver'] . '://' .
               $db[$connection_name]['username'] . ':' .
               $db[$connection_name]['password'] . '@' .
               $db[$connection_name]['hostname'] . '/' .
               $db[$connection_name]['database'];
        Doctrine_Manager::connection($dsn, $connection_name);
    }

    // CodeIgniter's Model class needs to be loaded
    require_once BASEPATH.'/libraries/Model.php';

    // telling Doctrine where our models are located
    Doctrine::loadModels(APPPATH.'/models');

    // (OPTIONAL) CONFIGURATION BELOW
    // this will allow us to use "mutators"
    Doctrine_Manager::getInstance()->setAttribute(
        Doctrine::ATTR_AUTO_ACCESSOR_OVERRIDE, true);

    // this sets all table columns to notnull and unsigned (for ints) by default
    Doctrine_Manager::getInstance()->setAttribute(
        Doctrine::ATTR_DEFAULT_COLUMN_OPTIONS,
        array('notnull' => true, 'unsigned' => true));

    // set the default primary key to be named 'id', integer, 4 bytes
    Doctrine_Manager::getInstance()->setAttribute(
        Doctrine::ATTR_DEFAULT_IDENTIFIER_OPTIONS,
        array('name' => 'id', 'type' => 'integer', 'length' => 4));
    ?>
    ```

    Anyone? P.S. I also tried adding the following right after the // (OPTIONAL) CONFIGURATION comment:

    ```php
    Doctrine_Manager::getInstance()->setAttribute(Doctrine::ATTR_MODEL_LOADING,
        Doctrine::MODEL_LOADING_CONSERVATIVE);
    spl_autoload_register(array('Doctrine', 'modelsAutoload'));
    ```
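
    For what it's worth, the usual culprit behind "Class 'BaseObjecten' not found" with Doctrine 1.x is that the generated Base* classes sit in a subdirectory the autoloader never sees. A minimal sketch of the conservative-loading setup, assuming the generated classes live in APPPATH.'/models/generated' (that path is an assumption, not taken from the post):

    ```php
    // hedged sketch: register the models autoloader *before* loadModels,
    // and point Doctrine at the generated Base* classes as well
    Doctrine_Manager::getInstance()->setAttribute(
        Doctrine::ATTR_MODEL_LOADING, Doctrine::MODEL_LOADING_CONSERVATIVE);
    spl_autoload_register(array('Doctrine', 'modelsAutoload'));
    Doctrine::loadModels(APPPATH.'/models');
    Doctrine::loadModels(APPPATH.'/models/generated'); // assumed location of BaseObjecten etc.
    ```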

  • Advanced Django query with subselects and custom JOINS

    - by Bryan Ward
    I have been investigating this number theoretic function (found in the Height model) and I need to query for things based on the prime factorization of the primary key, or id. I have created a model for factors of the id which maintains all of the prime factors.

    ```python
    class Height(models.Model):
        b = models.IntegerField(null=True, blank=True)
        c = models.IntegerField(null=True, blank=True)
        d = models.FloatField(null=True, blank=True)

    class Factors(models.Model):
        height = models.ForeignKey(Height, null=True, blank=True)
        factor = models.IntegerField(null=True, blank=True)
        degree = models.IntegerField(null=True, blank=True)
        prime_id = models.IntegerField(null=True, blank=True)
    ```

    For example, if id=24, then the associated entries in the factors table would be:

        height_id=24, factor=2, degree=3, prime_id=0
        height_id=24, factor=3, degree=1, prime_id=1

    The prime_id keeps track of the relative order of the primes. Now let p < q < r < s all be prime numbers and a, b, c, d be positive integers. Then I want to be able to query for all Heights of the form id=(p**a)*(q**b)*(r**c)*(s**d). Now this is simple in the case that all of p,q,r,s,a,b,c,d are known, in that I can just run

        Height.objects.get(id=(p**a)*(q**b)*(r**c)*(s**d))

    But I need to be able to query for something like (2**a)*(3**2)*(r**c)*(s**d) where r,s,a,d are unknown, and all Heights of such form will be returned. Furthermore, not all of the rows in Height will have exactly four prime factors, so I need to make sure that I am not matching rows of the form id=(p**a)*(q**b)*(r**c)*(s**d)*(t**e)... From what I can tell, the following MySQL query accomplishes this, but I would like to do it through the Django ORM. I also don't know if this MySQL query is the proper way to go about doing things.

    ```sql
    SELECT h.*, count(f.height_id) AS factorsCount
    FROM height AS h
    LEFT JOIN factors AS f ON (
        f.height_id = h.id
        AND f.height_id IN (SELECT height_id FROM factors WHERE prime_id=1 AND factor=2 AND degree=1)
        AND f.height_id IN (SELECT height_id FROM factors WHERE prime_id=2 AND factor=3 AND degree=2)
        AND f.height_id IN (SELECT height_id FROM factors WHERE prime_id=3 AND factor=5 AND degree=1)
        AND f.height_id IN (SELECT height_id FROM factors WHERE prime_id=4 AND factor=7 AND degree=1)
    )
    GROUP BY h.id
    HAVING factorsCount=4
    ORDER BY h.id;
    ```

    Any ideas or suggestions for things to try?
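
    For reference, the same constraint set can be expressed in the ORM without raw SQL: each chained filter() call on a reverse relation produces its own join (playing the role of the four IN sub-selects), and an annotate/Count guards against extra factors. A hedged sketch, assuming the default reverse name factors for the Factors foreign key:

    ```python
    from django.db.models import Count

    # hedged sketch: one filter() call per required prime factor; each call
    # joins a *separate* Factors row, and the distinct count pins the row
    # to exactly four prime factors
    matches = (Height.objects
               .filter(factors__prime_id=1, factors__factor=2, factors__degree=1)
               .filter(factors__prime_id=2, factors__factor=3, factors__degree=2)
               .filter(factors__prime_id=3, factors__factor=5, factors__degree=1)
               .filter(factors__prime_id=4, factors__factor=7, factors__degree=1)
               .annotate(num_factors=Count('factors', distinct=True))
               .filter(num_factors=4))
    ```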

  • PostgreSQL and PHP forms

    - by MrEnder
    OK, I have a PostgreSQL server with a database titled brittains_db that I only have PuTTY access to. I can also upload via FTP to the web server, which has access to PostgreSQL and the database somehow. I have made a SQL file named logins.sql:

    ```sql
    CREATE TABLE logins(
        userName VARCHAR(25) NOT NULL PRIMARY KEY,
        password VARCHAR(25) NOT NULL,
        firstName NOT NULL,
        lastName NOT NULL,
        ageDay INTEGER NOT NULL,
        ageMonth INTEGER NOT NULL,
        ageYear INTEGER NOT NULL,
        email VARCHAR(255) NOT NULL,
        createDate DATE
    )
    ```

    Then I made a form to get all that information:

    ```php
    <form action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post" >
    <table>
      <tr>
        <td class="signupTd">
          First Name:&nbsp;
        </td>
        <td>
          <input type="text" name="firstNameSignupForm" value="<?php echo $firstNameSignup; ?>" size="20"/>
        </td>
        <td>
          <?php echo $firstNameSignupError; ?>
        </td>
      </tr>
      ... code continues
    ```

    I had it save all the information in variables if the page was run on POST:

    ```php
    $firstNameSignup=trim($_POST["firstNameSignupForm"]);
    $lastNameSignup=trim($_POST["lastNameSignupForm"]);
    $userNameSignup=trim($_POST["userNameSignupForm"]);
    $passwordSignup=trim($_POST["passwordSignupForm"]);
    $passwordConfirmSignup=trim($_POST["passwordConfirmSignupForm"]);
    $monthSignup=trim($_POST["monthSignupForm"]);
    $daySignup=trim($_POST["daySignupForm"]);
    $yearSignup=trim($_POST["yearSignupForm"]);
    $emailSignup=trim($_POST["emailSignupForm"]);
    $emailConfirmSignup=trim($_POST["emailConfirmSignupForm"]);
    ```

    All information was then validated. Now comes the point where I need to get it into PostgreSQL. How do I put my table in Postgres? How do I insert my information into my table? And how would I recall that information to display it?
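
    Two notes before the PHP part: the posted DDL will not run as written, because firstName and lastName are declared NOT NULL without a type, and PostgreSQL folds unquoted identifiers to lower case. A hedged sketch of the full round trip (connection details and the VARCHAR(50) lengths are placeholders):

    ```sql
    -- hedged fix of logins.sql: give firstName/lastName a type, then load it
    -- over PuTTY with:  psql -d brittains_db -f logins.sql
    CREATE TABLE logins(
        userName   VARCHAR(25)  NOT NULL PRIMARY KEY,
        password   VARCHAR(25)  NOT NULL,
        firstName  VARCHAR(50)  NOT NULL,
        lastName   VARCHAR(50)  NOT NULL,
        ageDay     INTEGER      NOT NULL,
        ageMonth   INTEGER      NOT NULL,
        ageYear    INTEGER      NOT NULL,
        email      VARCHAR(255) NOT NULL,
        createDate DATE
    );
    ```

    ```php
    // hedged sketch: insert and read back with PHP's pgsql extension
    $conn = pg_connect("host=localhost dbname=brittains_db user=me password=secret");
    pg_query_params($conn,
        'INSERT INTO logins (username, password, firstname, lastname,
                             ageday, agemonth, ageyear, email, createdate)
         VALUES ($1, $2, $3, $4, $5, $6, $7, $8, CURRENT_DATE)',
        array($userNameSignup, $passwordSignup, $firstNameSignup, $lastNameSignup,
              $daySignup, $monthSignup, $yearSignup, $emailSignup));
    $res = pg_query($conn, 'SELECT username, email FROM logins');
    while ($row = pg_fetch_assoc($res)) {
        echo htmlspecialchars($row['username']) . ' ' . htmlspecialchars($row['email']) . '<br/>';
    }
    ```

    (Storing passwords as plain VARCHAR(25) is its own problem; hashing them before the INSERT is strongly advisable.)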

  • Contract developer trying to get outsourcing contract with current client.

    - by Mike
    I work for a major bank as a contract software developer. I've been there a few months, and without exception this place has the worst software practices I've ever seen. The software my team makes has no formal testing, terrible code (not reusable, hard to read, etc.), minimal documentation, no defined development process and an absolutely sickening amount of waste due to bureaucratic overhead.

    Part of my contract is to maintain a group of thousands of very poorly written batch jobs. When one of the jobs fails (read: crashes), it's a developer's job to look at the source, figure out what's wrong, fix it, and check it in. There is no quality assurance process or auditing of the results whatsoever. Once the developer says "it works" a manager signs off and it goes into production. What's disturbing is that these jobs essentially grab market data and put it into a third-party risk management system, which provides the bank with critical intelligence. I've discovered the disturbing truth that this has been happening since the 90s and nobody really has evidence the system is getting the correct data!

    Without going into details, an issue arose on Friday that was so horrible I actually stormed out of the place. I was ready to quit, but I decided to just get out to calm my nerves and possibly go back Monday. I've been reflecting today on how to handle this. I have realized that, in probably less than 6 months, I could (with 2 other developers) remake a large component of this system. The new system would provide them with, as primary benefits, a maintainable codebase less prone to error and a solid QA framework. To do it properly I would have to be outside the bank; the internal bureaucracy is just too much. And moreover, I think a bank is fundamentally not a place that can make good software. This is my plan:

    1. Write a report explaining in depth all the problems with their current system.
    2. Explain why their software practices fail and generate a tremendous amount of error and waste. Use this as the basis for claiming the project must be developed externally.
    3. Write a high-level development plan, including what resources I will require.
    4. Hand 1, 2 and 3 to my manager; hopefully he passes it up the chain. Worst case he fires me, but this isn't so bad.
    5. A convinced executive decides to award my company a contract for the new system.

    I have 8 years' experience as a software contractor and have delivered my share of successful software products, but all working in-house for small/medium-sized companies. When I read this over, I think I have a dynamite plan. But since this is the first time I'm doing something this bold, I have my doubts. My question is: is this a good idea? If you think not, please spare no detail.

  • Why does this pdo::mysql code crash on windows??

    - by user154107
    Why does this PDO/MySQL code crash on Windows?

    ```php
    <?php
    $username = "root";
    $password = "";
    try {
        $dsn = "mysql:host=localhost;dbname=employees";
        $dbh = new PDO($dsn, $username, $password);
        $dbh->setAttribute(PDO::ATTR_EMULATE_PREPARES, FALSE);
        $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        echo "Connected to database<br />";

        $dbh->exec("DROP TABLE IF EXISTS vCard;");
        $dbh->exec("DROP TABLE IF EXISTS emp;");
        $table = "CREATE TABLE vCard(
            id INT(4) NOT NULL PRIMARY KEY,
            firstName VARCHAR (255),
            lastName VARCHAR (255),
            office VARCHAR (255),
            homePh VARCHAR (13),
            mobilePh VARCHAR (13))";
        $dbh->exec($table);

        $dbh->beginTransaction();
        $dbh->exec("INSERT INTO vCard(id, firstName, lastName, office, homePh, mobilePh) VALUES (4834, 'Randy', 'Lewis', 'SR. Front End Developer', '631-842-3375', '917-435-2245');");
        $dbh->exec("INSERT INTO vCard(id, firstName, lastName, office, homePh, mobilePh) VALUES (0766, 'Frank', 'LaGuy', 'Graphic Designer', '631-789-8244', '917-324-9897');");
        $dbh->exec("INSERT INTO vCard(id, firstName, lastName, office, homePh, mobilePh) VALUES (6684, 'Donnie', 'Dolemite', 'COO', '631-789-9482', '917-234-1222');");
        $dbh->exec("INSERT INTO vCard(id, firstName, lastName, office, homePh, mobilePh) VALUES (8569, '', 'McLovin', 'Actor', '631-842-9786', '917-987-8944');");
        $dbh->commit();
        echo "Data entered successfully<br/><br/>";

        $sql = "SELECT * FROM vCard"; // WHERE firstName = 'Donnie'";
        $results = $dbh->query($sql);
        foreach ($results as $id) {
            echo "SSN: " . $id['id'] . " ";
            echo "First Name: " . $id['firstName'] . " ";
            echo "Last Name: " . $id['lastName'] . "<br/>";
        }
    } catch (PDOException $e) {
        echo "Failed: " . $e->getMessage();
        $dbh->rollback();
    }
    ?>
    ```

    Basically, this line of code is what triggers Apache to crash:

        $sql = "SELECT * FROM vCard";

    If I try to select a single column like 'id' it works, but when I try to select more than one column (or "*") it crashes. Why?
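
    Two hedged notes, since this symptom was commonly environmental rather than a logic bug: mismatched PHP/Apache/libmysql builds on Windows were a frequent cause of exactly this kind of crash, and forcing buffered queries often sidestepped it. Also, the catch block calls $dbh->rollback() even when no transaction is open, which itself throws. A sketch (PDO::inTransaction() requires PHP 5.3.3+):

    ```php
    // hedged sketch: buffered results avoid a known pdo_mysql crash pattern on Windows
    $dbh->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, true);

    // only roll back if a transaction is actually open
    if ($dbh->inTransaction()) {
        $dbh->rollBack();
    }
    ```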

  • Can LINQ-to-SQL omit unspecified columns on insert so a database default value is used?

    - by Todd Ropog
    I have a non-nullable database column which has a default value set. When inserting a row, sometimes a value is specified for the column, sometimes one is not. This works fine in T-SQL when the column is omitted. For example, given the following table:

    ```sql
    CREATE TABLE [dbo].[Table1](
        [id] [int] IDENTITY(1,1) NOT NULL,
        [col1] [nvarchar](50) NOT NULL,
        [col2] [nvarchar](50) NULL,
        CONSTRAINT [PK_Table1] PRIMARY KEY CLUSTERED ([id] ASC)
    )
    GO
    ALTER TABLE [dbo].[Table1] ADD CONSTRAINT [DF_Table1_col1] DEFAULT ('DB default') FOR [col1]
    ```

    The following two statements will work:

    ```sql
    INSERT INTO Table1 (col1, col2) VALUES ('test value', '')
    INSERT INTO Table1 (col2) VALUES ('')
    ```

    In the second statement, the default value is used for col1. The problem I have is when using LINQ-to-SQL (L2S) with a table like this. I want to produce the same behavior, but I can't figure out how to make L2S do that. I want to be able to run the following code and have the first row get the value I specify and the second row get the default value from the database:

    ```csharp
    var context = new DataClasses1DataContext();

    var row1 = new Table1 { col1 = "test value", col2 = "" };
    context.Table1s.InsertOnSubmit(row1);
    context.SubmitChanges();

    var row2 = new Table1 { col2 = "" };
    context.Table1s.InsertOnSubmit(row2);
    context.SubmitChanges();
    ```

    If the Auto Generated Value property of col1 is False, the first row is created as desired, but the second row fails with a null error on col1. If Auto Generated Value is True, both rows are created with the default value from the database. I've tried various combinations of Auto Generated Value, Auto-Sync and Nullable, but nothing I've tried gives the behavior I want. L2S does not omit the column from the insert statement when no value is specified. Instead it does something like this:

    ```sql
    INSERT INTO Table1 (col1, col2) VALUES (null, '')
    ```

    ...which of course causes a null error on col1. Is there some way to get L2S to omit a column from the insert statement if no value is given? Or is there some other way to get the behavior I want? I need the default value at the database level because not all row inserts are done via L2S, and in some cases the default value is a little more complex than a hard-coded value (e.g. creating the default based on another field), so I'd rather avoid duplicating that logic.
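
    LINQ-to-SQL builds its INSERT column list statically from the mapping, so there is no supported per-row way to drop col1 from the statement. A workaround that is often suggested, at the cost of duplicating a simple default in the entity layer, is the generated entity's OnCreated partial method. A hedged sketch:

    ```csharp
    // hedged sketch: mirror the database default in the entity so an
    // "unspecified" col1 still satisfies the NOT NULL constraint
    public partial class Table1
    {
        partial void OnCreated()
        {
            _col1 = "DB default"; // assumption: covers the simple-default case only
        }
    }
    ```

    For the computed-default case, mapping the insert to a stored procedure, or issuing a plain DataContext.ExecuteCommand for the rows that need the database default, avoids duplicating the logic in C#.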

  • Non-linear regression models in PostgreSQL using R

    - by Dave Jarvis
    Background: I have climate data (temperature, precipitation, snow depth) for all of Canada between 1900 and 2009. I have written a basic website, and the simplest page allows users to choose a category and a city. They then get back a very simple report (without the parameters and calculations section). The primary purpose of the web application is to provide a simple user interface so that the general public can explore the data in meaningful ways. (A list of numbers is not meaningful to the general public, nor is a website that provides too many inputs.) The secondary purpose of the application is to provide climatologists and other scientists with deeper ways to view the data. (Using too many inputs, of course.)

    Tool set: the database is PostgreSQL with R (mostly) installed. The reports are written using iReport and generated using JasperReports.

    Poor model choice: currently, a linear regression model is applied against annual averages of daily data. The linear regression model is calculated within a PostgreSQL function as follows:

    ```sql
    SELECT
        regr_slope( amount, year_taken ),
        regr_intercept( amount, year_taken ),
        corr( amount, year_taken )
    FROM temp_regression
    INTO STRICT slope, intercept, correlation;
    ```

    The results are returned to JasperReports using:

    ```sql
    SELECT
        year_taken, amount, year_taken * slope + intercept,
        slope, intercept, correlation, total_measurements
    INTO result;
    ```

    JasperReports calls into PostgreSQL using the following parameterized analysis function:

    ```sql
    SELECT
        year_taken, amount, measurements, regression_line, slope,
        intercept, correlation, total_measurements, execute_time
    FROM climate.analysis(
        $P{CityId}, $P{Elevation1}, $P{Elevation2}, $P{Radius},
        $P{CategoryId}, $P{Year1}, $P{Year2} )
    ORDER BY year_taken
    ```

    This is not an optimal solution, because it gives the false impression that the climate is changing at a slow but steady rate.

    Questions: using functions that take two parameters (e.g., year [X] and amount [Y]), such as PostgreSQL's regr_slope:

    1. What is a better regression model to apply?
    2. What CRAN (R) packages provide such models? (Installable, ideally, using apt-get.)
    3. How can the R functions be called within a PostgreSQL function?

    If no such functions exist:

    1. What parameters should I try to obtain for functions that will produce the desired fit?
    2. How would you recommend showing the best-fit curve?

    Keep in mind that this is a web app for use by the general public. If the only way to analyse the data is from an R shell, then the purpose has been defeated. (I know this is not the case for most R functions I have looked at so far.) Thank you!
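
    On the R-in-PostgreSQL question: the PL/R procedural language lets a PostgreSQL function body be written in R, so a smoother fit can be computed server-side and returned to JasperReports just like the current linear fit. A hedged sketch, assuming PL/R is installed and that the years/amounts arrive as parallel arrays (the function and parameter names are illustrative, not from the existing schema):

    ```sql
    -- hedged sketch: LOESS (local regression) fit via PL/R
    CREATE OR REPLACE FUNCTION climate.loess_fit(years float8[], amounts float8[])
    RETURNS float8[] AS $$
      fit <- loess(amounts ~ years)              # stats::loess, ships with base R
      predict(fit, data.frame(years = years))    # fitted value for each input year
    $$ LANGUAGE plr;
    ```

    On the model question itself, LOESS (or a low-order polynomial via lm(amount ~ poly(year, 2))) is a common next step past a straight line; both live in base R's stats package, so no extra CRAN install is needed.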

  • Why is numpy's einsum faster than numpy's built-in functions?

    - by Ophion
    Let's start with three arrays of dtype=np.double. Timings are performed on an Intel CPU using numpy 1.7.1 compiled with icc and linked to Intel's MKL. An AMD CPU with numpy 1.6.1 compiled with gcc without MKL was also used to verify the timings. Please note that the timings scale nearly linearly with system size and are not due to the small overhead incurred in the numpy functions' if statements; those differences would show up in microseconds, not milliseconds:

    ```python
    arr_1D=np.arange(500,dtype=np.double)
    large_arr_1D=np.arange(100000,dtype=np.double)
    arr_2D=np.arange(500**2,dtype=np.double).reshape(500,500)
    arr_3D=np.arange(500**3,dtype=np.double).reshape(500,500,500)
    ```

    First let's look at the np.sum function:

    ```python
    np.all(np.sum(arr_3D)==np.einsum('ijk->',arr_3D))
    # True
    %timeit np.sum(arr_3D)
    # 10 loops, best of 3: 142 ms per loop
    %timeit np.einsum('ijk->', arr_3D)
    # 10 loops, best of 3: 70.2 ms per loop
    ```

    Powers:

    ```python
    np.allclose(arr_3D*arr_3D*arr_3D,np.einsum('ijk,ijk,ijk->ijk',arr_3D,arr_3D,arr_3D))
    # True
    %timeit arr_3D*arr_3D*arr_3D
    # 1 loops, best of 3: 1.32 s per loop
    %timeit np.einsum('ijk,ijk,ijk->ijk', arr_3D, arr_3D, arr_3D)
    # 1 loops, best of 3: 694 ms per loop
    ```

    Outer product:

    ```python
    np.all(np.outer(arr_1D,arr_1D)==np.einsum('i,k->ik',arr_1D,arr_1D))
    # True
    %timeit np.outer(arr_1D, arr_1D)
    # 1000 loops, best of 3: 411 us per loop
    %timeit np.einsum('i,k->ik', arr_1D, arr_1D)
    # 1000 loops, best of 3: 245 us per loop
    ```

    All of the above are twice as fast with np.einsum. These should be apples-to-apples comparisons, as everything is specifically of dtype=np.double. I would expect the speedup in an operation like this:

    ```python
    np.allclose(np.sum(arr_2D*arr_3D),np.einsum('ij,oij->',arr_2D,arr_3D))
    # True
    %timeit np.sum(arr_2D*arr_3D)
    # 1 loops, best of 3: 813 ms per loop
    %timeit np.einsum('ij,oij->', arr_2D, arr_3D)
    # 10 loops, best of 3: 85.1 ms per loop
    ```

    Einsum seems to be at least twice as fast for np.inner, np.outer, np.kron, and np.sum regardless of axes selection. The primary exception is np.dot, as it calls DGEMM from a BLAS library. So why is np.einsum faster than other numpy functions that are equivalent? The DGEMM case for completeness:

    ```python
    np.allclose(np.dot(arr_2D,arr_2D),np.einsum('ij,jk',arr_2D,arr_2D))
    # True
    %timeit np.einsum('ij,jk',arr_2D,arr_2D)
    # 10 loops, best of 3: 56.1 ms per loop
    %timeit np.dot(arr_2D,arr_2D)
    # 100 loops, best of 3: 5.17 ms per loop
    ```

    The leading theory is from @seberg's comment that np.einsum can make use of SSE2, but numpy's ufuncs will not until numpy 1.8 (see the change log). I believe this is the correct answer, but have not been able to confirm it. Some limited proof can be found by changing the dtype of the input arrays and observing the speed difference, and by the fact that not everyone observes the same trends in timings.

  • Django conditional template inheritance

    - by Ed
    I have a template that displays object elements with hyperlinks to other parts of my site. I have another function that displays past versions of the same object. In this display, I don't want the hyperlinks. I'm under the assumption that I can't dynamically switch off the hyperlinks, so I've included both versions in the same template. I use an if statement to either display the hyperlinked version or the plain-text version. I prefer to keep them in the same template because if I need to change the format of one, it will be easy to apply it to the other right there.

    The template extends framework.html. Framework has a breadcrumb system, and it extends base.html. Base has a simple top menu system. So here's my dilemma. When viewing the standard hyperlink data, I want to see the top menu and the breadcrumbs. But when viewing the past-version plain-text data, I only want the data: no menu, no breadcrumbs. I'm unsure if this is possible given my current design.

    I tried having framework inherit the primary template so that I could choose to call either framework (and display the breadcrumbs), or the template itself, thus skipping the breadcrumbs, but I want framework.html available for other templates as well. If framework.html extends a specific template, I lose the ability to display it in other templates.

    I tried writing an if statement that would display the top_menu block and the nav_menu block from base.html and framework.html respectively. This would overwrite their blocks and allow me to turn off those elements conditional on the if. Unfortunately, it doesn't appear to be conditional; if the block elements are in the template at all, surrounded by an if or not, I lose the menus.

    I thought about using {% include %} to pick up the breadcrumbs and a split-out top menu. In that case, though, I'll have to include it all the time. No more inheritance. Is this the best option given my requirement?
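
    One Django feature worth knowing here: {% extends %} accepts a variable, so the same child template can inherit either the full chrome or a bare shell, chosen in the view. A minimal sketch, assuming a stripped-down bare.html exists (all names are illustrative, and on older Django versions render_to_response with a context works the same way):

    ```python
    # hedged sketch: pick the parent template per request
    from django.shortcuts import render

    def object_detail(request, pk, history=False):
        base = "bare.html" if history else "framework.html"
        return render(request, "object_detail.html", {"base_template": base})
    ```

    In object_detail.html the first line then becomes {% extends base_template %}; the block content is written once, and the breadcrumbs/menu appear only when framework.html is the parent.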

  • VB6 app not executing as scheduled task unless user is logged on

    - by Tedd Hansen
    Hi, I would greatly appreciate some help on this one! It may be a tricky one. :)

    Problem: I have a VB6 application which is set up as a scheduled task. It starts every time, but when executing CreateObject it fails if the user is not logged on to the computer. I am looking for information on what could cause this. My primary suspicion is that some Windows API call fails.

    Key points:
    - Behaviour confirmed on Windows 2000, 2003, 2008 and Vista.
    - The application executes as user X at the scheduled time, executed by Windows Task Scheduler. It executes every time. The application does start!
    - If user X is logged on via RDP, it runs perfectly. (Note that the user doesn't need to be connected, only logged on.)
    - If user X is not logged on to the computer, the application fails.

    Failure point: the application fails when using CreateObject() to instantiate a DCOM object which is also part of the application. The DCOM objects declare .dll references at startup (globally, at the top of the .bas file) and run a small startup function. The failure must be during startup, possibly in one of the .dll declarations.

    Thoughts: after some Googling, my initial suspicion was directed at MAPI. From what I could see, MAPI requires the user to be logged on. The application has MAPI references. But even with all MAPI references removed it still does not work. What is the difference if a user is logged on? Registry mapping? Environment? explorer.exe is running. Isn't the user logged on when the application executes as that user?

    What info would help? A definitive answer would be truly great. Any information regarding any VB6 feature or Windows API that could act differently depending on whether the user is logged on or not would definitely help. Similar experiences may lead me in the right direction. Tips on debugging this. Thanks! :)

  • Why is Oracle using a skip scan for this query?

    - by Jason Baker
    Here's the tkprof output for a query that's running extremely slowly (WARNING: it's long :-) ): SELECT mbr_comment_idn, mbr_crt_dt, mbr_data_source, mbr_dol_bl_rmo_ind, mbr_dxcg_ctl_member, mbr_employment_start_dt, mbr_employment_term_dt, mbr_entity_active, mbr_ethnicity_idn, mbr_general_health_status_code, mbr_hand_dominant_code, mbr_hgt_feet, mbr_hgt_inches, mbr_highest_edu_level, mbr_insd_addr_idn, mbr_insd_alt_id, mbr_insd_name, mbr_insd_ssn_tin, mbr_is_smoker, mbr_is_vip, mbr_lmbr_first_name, mbr_lmbr_last_name, mbr_marital_status_cd, mbr_mbr_birth_dt, mbr_mbr_death_dt, mbr_mbr_expired, mbr_mbr_first_name, mbr_mbr_gender_cd, mbr_mbr_idn, mbr_mbr_ins_type, mbr_mbr_isreadonly, mbr_mbr_last_name, mbr_mbr_middle_name, mbr_mbr_name, mbr_mbr_status_idn, mbr_mpi_id, mbr_preferred_am_pm, mbr_preferred_time, mbr_prv_innetwork, mbr_rep_addr_idn, mbr_rep_name, mbr_rp_mbr_id, mbr_same_mbr_ins, mbr_special_needs_cd, mbr_timezone, mbr_upd_dt, mbr_user_idn, mbr_wgt, mbr_work_status_idn FROM (SELECT /*+ FIRST_ROWS(1) */ mbr_comment_idn, mbr_crt_dt, mbr_data_source, mbr_dol_bl_rmo_ind, mbr_dxcg_ctl_member, mbr_employment_start_dt, mbr_employment_term_dt, mbr_entity_active, mbr_ethnicity_idn, mbr_general_health_status_code, mbr_hand_dominant_code, mbr_hgt_feet, mbr_hgt_inches, mbr_highest_edu_level, mbr_insd_addr_idn, mbr_insd_alt_id, mbr_insd_name, mbr_insd_ssn_tin, mbr_is_smoker, mbr_is_vip, mbr_lmbr_first_name, mbr_lmbr_last_name, mbr_marital_status_cd, mbr_mbr_birth_dt, mbr_mbr_death_dt, mbr_mbr_expired, mbr_mbr_first_name, mbr_mbr_gender_cd, mbr_mbr_idn, mbr_mbr_ins_type, mbr_mbr_isreadonly, mbr_mbr_last_name, mbr_mbr_middle_name, mbr_mbr_name, mbr_mbr_status_idn, mbr_mpi_id, mbr_preferred_am_pm, mbr_preferred_time, mbr_prv_innetwork, mbr_rep_addr_idn, mbr_rep_name, mbr_rp_mbr_id, mbr_same_mbr_ins, mbr_special_needs_cd, mbr_timezone, mbr_upd_dt, mbr_user_idn, mbr_wgt, mbr_work_status_idn, ROWNUM AS ora_rn FROM (SELECT mbr.comment_idn AS mbr_comment_idn, mbr.crt_dt AS mbr_crt_dt, mbr.data_source AS mbr_data_source, mbr.dol_bl_rmo_ind AS mbr_dol_bl_rmo_ind, mbr.dxcg_ctl_member AS mbr_dxcg_ctl_member, mbr.employment_start_dt AS mbr_employment_start_dt, mbr.employment_term_dt AS mbr_employment_term_dt, mbr.entity_active AS mbr_entity_active, mbr.ethnicity_idn AS mbr_ethnicity_idn, mbr.general_health_status_code AS mbr_general_health_status_code, mbr.hand_dominant_code AS mbr_hand_dominant_code, mbr.hgt_feet AS mbr_hgt_feet, mbr.hgt_inches AS mbr_hgt_inches, mbr.highest_edu_level AS mbr_highest_edu_level, mbr.insd_addr_idn AS mbr_insd_addr_idn, mbr.insd_alt_id AS mbr_insd_alt_id, mbr.insd_name AS mbr_insd_name, mbr.insd_ssn_tin AS mbr_insd_ssn_tin, mbr.is_smoker AS mbr_is_smoker, mbr.is_vip AS mbr_is_vip, mbr.lmbr_first_name AS mbr_lmbr_first_name, mbr.lmbr_last_name AS mbr_lmbr_last_name, mbr.marital_status_cd AS mbr_marital_status_cd, mbr.mbr_birth_dt AS mbr_mbr_birth_dt, mbr.mbr_death_dt AS mbr_mbr_death_dt, mbr.mbr_expired AS mbr_mbr_expired, mbr.mbr_first_name AS mbr_mbr_first_name, mbr.mbr_gender_cd AS mbr_mbr_gender_cd, mbr.mbr_idn AS mbr_mbr_idn, mbr.mbr_ins_type AS mbr_mbr_ins_type, mbr.mbr_isreadonly AS mbr_mbr_isreadonly, mbr.mbr_last_name AS mbr_mbr_last_name, mbr.mbr_middle_name AS mbr_mbr_middle_name, mbr.mbr_name AS mbr_mbr_name, mbr.mbr_status_idn AS mbr_mbr_status_idn, mbr.mpi_id AS mbr_mpi_id, mbr.preferred_am_pm AS mbr_preferred_am_pm, mbr.preferred_time AS mbr_preferred_time, mbr.prv_innetwork AS mbr_prv_innetwork, mbr.rep_addr_idn AS mbr_rep_addr_idn, mbr.rep_name AS 
mbr_rep_name, mbr.rp_mbr_id AS mbr_rp_mbr_id, mbr.same_mbr_ins AS mbr_same_mbr_ins, mbr.special_needs_cd AS mbr_special_needs_cd, mbr.timezone AS mbr_timezone, mbr.upd_dt AS mbr_upd_dt, mbr.user_idn AS mbr_user_idn, mbr.wgt AS mbr_wgt, mbr.work_status_idn AS mbr_work_status_idn FROM mbr JOIN mbr_identfn ON mbr.mbr_idn = mbr_identfn.mbr_idn WHERE mbr_identfn.mbr_idn = mbr.mbr_idn AND mbr_identfn.identfd_type = :identfd_type_1 AND mbr_identfn.identfd_number = :identfd_number_1 AND mbr_identfn.entity_active = :entity_active_1) WHERE ROWNUM <= :ROWNUM_1) WHERE ora_rn > :ora_rn_1 call count cpu elapsed disk query current rows ------- ------ -------- ---------- ---------- ---------- ---------- ---------- Parse 9936 0.46 0.49 0 0 0 0 Execute 9936 0.60 0.59 0 0 0 0 Fetch 9936 329.87 404.00 0 136966922 0 0 ------- ------ -------- ---------- ---------- ---------- ---------- ---------- total 29808 330.94 405.09 0 136966922 0 0 Misses in library cache during parse: 0 Optimizer mode: FIRST_ROWS Parsing user id: 36 (JIVA_DEV) Rows Row Source Operation ------- --------------------------------------------------- 0 VIEW (cr=102 pr=0 pw=0 time=2180 us) 0 COUNT STOPKEY (cr=102 pr=0 pw=0 time=2163 us) 0 NESTED LOOPS (cr=102 pr=0 pw=0 time=2152 us) 0 INDEX SKIP SCAN IDX_MBR_IDENTFN (cr=102 pr=0 pw=0 time=2140 us)(object id 341053) 0 TABLE ACCESS BY INDEX ROWID MBR (cr=0 pr=0 pw=0 time=0 us) 0 INDEX UNIQUE SCAN PK_CLAIMANT (cr=0 pr=0 pw=0 time=0 us)(object id 334044) Rows Execution Plan ------- --------------------------------------------------- 0 SELECT STATEMENT MODE: HINT: FIRST_ROWS 0 VIEW 0 COUNT (STOPKEY) 0 NESTED LOOPS 0 INDEX MODE: ANALYZED (SKIP SCAN) OF 'IDX_MBR_IDENTFN' (INDEX (UNIQUE)) 0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'MBR' (TABLE) 0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PK_CLAIMANT' (INDEX (UNIQUE)) ******************************************************************************** Based on my reading of Oracle's documentation of skip scans, a skip scan is most useful when the first column of an index has a low number of unique values. The thing is that the first index of this column is a unique primary key. So am I correct in assuming that a skip scan is the wrong thing to do here? Also, what kind of scan should it be doing? Should I do some more hinting for this query? EDIT: I should also point out that the query's where clause uses the columns in IDX_MBR_IDENTFN and no columns other than what's in that index. So as far as I can tell, I'm not skipping any columns.
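
    On the question itself: a skip scan appears when the leading column of an index is not constrained by the predicates, so Oracle probes each distinct leading value. Since identfd_type, identfd_number and entity_active are all equality-tested here, an index that leads with those columns supports a plain range scan instead. A hedged sketch (the column order is inferred from the WHERE clause, not from the actual DDL):

    ```sql
    -- hedged sketch: equality-tested columns first, then the join column,
    -- so the access path can be an index range scan rather than a skip scan
    CREATE INDEX idx_mbr_identfn_lookup
        ON mbr_identfn (identfd_type, identfd_number, entity_active, mbr_idn);
    ```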

  • Android: database reading problem throws exception

    - by Vamsi
    Hi, i am having this problem with the android database. I adopted the DBAdapter file the NotepadAdv3 example from the google android page. DBAdapter.java public class DBAdapter { private static final String TAG = "DBAdapter"; private static final String DATABASE_NAME = "PasswordDb"; private static final String DATABASE_TABLE = "myuserdata"; private static final String DATABASE_USERKEY = "myuserkey"; private static final int DATABASE_VERSION = 2; public static final String KEY_USERKEY = "userkey"; public static final String KEY_TITLE = "title"; public static final String KEY_DATA = "data"; public static final String KEY_ROWID = "_id"; private final Context mContext; private DatabaseHelper mDbHelper; private SQLiteDatabase mDb; private static final String DB_CREATE_KEY = "create table " + DATABASE_USERKEY + " (" + "userkey text not null" +");"; private static final String DB_CREATE_DATA = "create table " + DATABASE_TABLE + " (" + "_id integer primary key autoincrement, " + "title text not null" + "data text" +");"; private static class DatabaseHelper extends SQLiteOpenHelper { DatabaseHelper(Context context) { super(context, DATABASE_NAME, null, DATABASE_VERSION); } @Override public void onCreate(SQLiteDatabase db) { db.execSQL(DB_CREATE_KEY); db.execSQL(DB_CREATE_DATA); } @Override public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { Log.w(TAG, "Upgrading database from version " + oldVersion + " to " + newVersion + ", which will destroy all old data"); db.execSQL("DROP TABLE IF EXISTS myuserkey"); db.execSQL("DROP TABLE IF EXISTS myuserdata"); onCreate(db); } } public DBAdapter(Context ctx) { this.mContext = ctx; } public DBAdapter Open() throws SQLException{ try { mDbHelper = new DatabaseHelper(mContext); } catch(Exception e){ Log.e(TAG, e.toString()); } mDb = mDbHelper.getWritableDatabase(); return this; } public void close(){ mDbHelper.close(); } public Long storeKey(String userKey){ ContentValues initialValues = new ContentValues(); initialValues.put(KEY_USERKEY, userKey); try { mDb.delete(DATABASE_USERKEY, "1=1", null); } catch(Exception e) { Log.e(TAG, e.toString()); } return mDb.insert(DATABASE_USERKEY, null, initialValues); } public String retrieveKey() { final Cursor c; try { c = mDb.query(DATABASE_USERKEY, new String[] { KEY_USERKEY}, null, null, null, null, null); }catch(Exception e){ Log.e(TAG, e.toString()); return ""; } if(c.moveToFirst()){ return c.getString(0); } else{ Log.d(TAG, "UserKey Empty"); } return ""; } //not including any function related to "myuserdata" table } Class1.java { mUserKey = mDbHelper.retrieveKey(); mDbHelper.storeKey(Key); } the error that i am receiving is from Log.e(TAG, e.toString()) in the methods retrieveKey() and storeKey() "no such table: myuserkey: , while compiling: SELECT userkey FROM myuserkey"
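
    The likely culprit is visible in DB_CREATE_DATA: the missing comma (and space) between "title text not null" and "data text" concatenates into title text not nulldata text, a SQL syntax error. Because SQLiteOpenHelper runs onCreate() inside a transaction, the earlier successful CREATE of myuserkey is rolled back along with it, which is exactly why the later query reports "no such table: myuserkey". A hedged fix (bump DATABASE_VERSION or clear the app's data afterwards so onCreate() runs again):

    ```java
    // hedged sketch: note the comma separating the column definitions
    private static final String DB_CREATE_DATA =
        "create table " + DATABASE_TABLE + " ("
        + "_id integer primary key autoincrement, "
        + "title text not null, "
        + "data text"
        + ");";
    ```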

  • Problem with ColdFusion communicating with MySQL database

    - by Greg
    Hi, I have been working to migrate a non-profit website from a local server (running Windows XP) to a GoDaddy hosting account (running Linux). Most of the pages are written in ColdFusion. Things have gone smoothly, up until this point. There is a flash form within the site (see this page: http://www.preservenet.cornell.edu/employ/submitjob.cfm) which, when completed, takes the visitor to this page: submitjobaction.cfm . I'm not quite sure what to make of this error, since I copied exactly what had been in the old MySQL database, and the .cfm files are exactly as they had been when they worked on the old server. Am I missing something? Below is the code from the database that the error seems to be referring to. When I change "Positionlat" to some default value in the database as it suggests in the error, it says that another field needs a default value, and it's a neverending chain of errors as I try to correct it. This is probably a stupid error that I'm missing, but I've been working at it for days and can't find what I'm missing. I would really appreciate any help. Thanks! -Greg DROP TABLE IF EXISTS employopp; CREATE TABLE employopp ( POSTID int(10) NOT NULL auto_increment, USERID varchar(10) collate latin1_general_ci default NULL, STATUS varchar(10) collate latin1_general_ci NOT NULL default 'ACTIVE', TYPE varchar(50) collate latin1_general_ci default 'professional', JOBTITLE varchar(70) collate latin1_general_ci default NULL, NUMBER varchar(30) collate latin1_general_ci default NULL, SALARY varchar(40) collate latin1_general_ci default NULL, ORGNAME varchar(70) collate latin1_general_ci default NULL, DEPTNAME varchar(70) collate latin1_general_ci default NULL, ORGDETAILS mediumtext character set utf8 collate utf8_unicode_ci, ORGWEBSITE varchar(200) collate latin1_general_ci default NULL, ADDRESS varchar(60) collate latin1_general_ci default 'none given', ADDRESS2 varchar(60) collate latin1_general_ci default NULL, CITY varchar(30) collate latin1_general_ci default NULL, STATE varchar(30) collate latin1_general_ci default NULL, COUNTRY varchar(3) collate latin1_general_ci default 'USA', POSTALCODE varchar(10) collate latin1_general_ci default NULL, EMAIL varchar(75) collate latin1_general_ci default NULL, NOMAIL varchar(5) collate latin1_general_ci default NULL, PHONE varchar(20) collate latin1_general_ci default NULL, FAX varchar(20) collate latin1_general_ci default NULL, WEBSITE varchar(200) collate latin1_general_ci default NULL, POSTDATE varchar(10) collate latin1_general_ci default NULL, POSTUNTIL varchar(20) collate latin1_general_ci default 'select date', POSTUNTILFILLED varchar(20) collate latin1_general_ci NOT NULL default 'until filled', texteHTML mediumtext character set utf8 collate utf8_unicode_ci, HOWTOAPPLY mediumtext character set utf8 collate utf8_unicode_ci, CONFIRSTNM varchar(30) collate latin1_general_ci default NULL, CONLASTNM varchar(60) collate latin1_general_ci default NULL, POSITIONCITY varchar(30) collate latin1_general_ci default NULL, POSITIONSTATE varchar(30) collate latin1_general_ci default NULL, POSITIONCOUNTRY varchar(3) collate latin1_general_ci default 'USA', POSITIONLAT varchar(50) collate latin1_general_ci NOT NULL, POSITIONLNG varchar(50) collate latin1_general_ci NOT NULL, PRIMARY KEY (POSTID) ) ENGINE=MyISAM AUTO_INCREMENT=2007 DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci;
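
    A hedged reading of the error chain: the ColdFusion insert presumably omits POSITIONLAT and POSITIONLNG, which are declared NOT NULL with no default, so MySQL in strict mode rejects the row, and adding a default to one column just moves the error to the next such column. One fix is to give those columns an explicit default (or make them nullable) in a single pass rather than editing each one interactively:

    ```sql
    -- hedged sketch: give the lat/lng columns a default so inserts that omit them succeed
    ALTER TABLE employopp
      MODIFY POSITIONLAT varchar(50) COLLATE latin1_general_ci NOT NULL DEFAULT '',
      MODIFY POSITIONLNG varchar(50) COLLATE latin1_general_ci NOT NULL DEFAULT '';
    ```

    Alternatively, relaxing the new Linux host's sql_mode to match the old Windows server's non-strict behavior would restore the previous (silently forgiving) handling.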

  • Gathering Staff: anyone interested?

    - by kasene
    Thread Title - Gathering Staff

    RushSoft Game Design is currently looking for staff of a moderate skill level.

    Team Name - RushSoft Game Design
    Project Name - N/A (we are gathering staff so that we can begin working on a new game)

    Target Aim - Freeware, or a free version plus a paid version. With our first project our aim is simply to get our name out there. Generally we will be targeting a freeware distribution platform, or a free and a paid version.

    Compensation - Perhaps in the future, but don't rely on it. If in the future we start developing a game we intend to make any sort of sizable profit from, then yes, there will be compensation. However, currently our low, low funding comes from generous donations. Any money that we make for now will go to the team's funding for things like engine licenses and company registration.

    Technology - C/C++ and RSETech. Our primary language will be C/C++, as for most games. We will be using a custom-built library built on Direct3D called RSETech (RushSoft Engine Technology). Currently it is fully capable of being used for developing a game. The final version is made up almost entirely of C (no C++ or OOP); there is a C++ version currently in the works.
    Programming - Microsoft Visual C++ 2008 / 2010
    2D Art - Photoshop CS2, GIMP

    Talent Needed:
    - 2x Programmers, with an understanding of C/C++ and the following game programming aspects: if/else conditions; functions/methods; arrays; pointers (you don't need to fully understand these, just know when they need to be used); enums; loops (for and while); structs (and how to use . and -> syntax); classes (and how to call methods and access variables from a class); state machines; switches; include guards; and an understanding of how game loops work in general (init, update, render, deinit).
    - 2x Artists. As long as you have the means to draw 2D sprites and can collaborate with a game designer to get a good result.
    - 1 or more Game Designers. You can design levels (for platformers) as well as write game scripts, and you can come up with good ideas and game mechanics. As long as you can do these things and can work well with artists and programmers, you're golden.
    - Business Consultant. Someone who knows the industry and how it works. Will inquire about possible distribution platforms as well as contact other developers, websites, and publishers on RushSoft's behalf.

    Team Structure:
    - Kasene Clark - Co-Founder/Lead Programmer/Game Designer
    - Casey W - Co-Founder/Artist (GC/Animation)/Game Designer
    - Nathan Mayworm - Game Designer

    Website - RushSoft Website
    Contact - Kasene Clark: [email protected] / [email protected]; Phone - 12075181967
    Feedback - Any

    Thank You!
    -Kasene

  • Can a View Controller manage more than 1 nib based view?

    - by Hugo Brynjar
    I have a VC controlling a screen of content that has 2 modes: a normal mode and an edit mode. Can I create a single VC with 2 views, each from separate nibs?

    In many situations on the iPhone, you have a VC which controls an associated view. Then, on a button press or other event, a new VC is loaded and its view becomes the top-level view, etc. But in this situation I have 2 modes that I want to use the same VC for, because they are closely related. So I want a VC which can swap in/out 2 views. As per here: http://stackoverflow.com/questions/863321/iphone-how-to-load-a-view-using-a-nib-file-created-with-interface-builder/2683153#2683153 I have found that I can load a VC with an associated view from a nib and then later on load a different view from another nib and make that new view the active view.

    ```objc
    NSArray *nibObjects = [[NSBundle mainBundle] loadNibNamed:@"EditMode" owner:self options:nil];
    UIView *theEditView = [nibObjects objectAtIndex:0];
    self.editView = theEditView;
    [self.view addSubview:theEditView];
    ```

    The secondary nib has outlets wired up to the VC like the primary nib. When the new nib is loaded, the outlets are all connected up fine and everything works nicely. Unfortunately, when this edit view is then removed, there doesn't seem to be any elegant way of getting the outlets hooked up again to the (normal mode) view from the original nib. Nib loading and outlet setting seems to be a once-only thing.

    So, if you want to have a VC that swaps in/out 2 views without creating a new VC, what are the options?

    1. You can do everything in code, but I want to use nibs because it makes creating the UI simpler.
    2. You have 1 nib for your VC and just hide/show elements using the hidden property of UIView and its subclasses.
    3. You load a new nib as described above. This is fine for the new nib, but how do you restore the outlets when you go back to the original nib?
    4. Give up and accept a 1:1 relationship between VCs and nibs. There is a nib for normal mode, a nib for edit mode, and each mode has a VC that subclasses a common superclass.

    In the end, I went with 4 and it works, but it requires a fair amount of extra work, because I have a model class that I instantiate in normal mode and then have to pass to the edit mode VC, since both modes need access to the model. I'm also using NSTimer and have to start and stop the timer in each mode. It is because of all this shared functionality that I wanted a single VC with 2 nibs in the first place.

  • Do you still limit line length in code?

    - by Noldorin
    This is a matter on which I would like to gauge the opinion of the community: do you still limit the length of lines of code to a fixed maximum?

    This was certainly a convention of the past for many languages; one would typically cap the number of characters per line at a value such as 80 (and more recently 100 or 120, I believe). As far as I understand, the primary reasons for limiting line length are:

    - Readability: you don't have to scroll horizontally when you want to see the end of some lines.
    - Printing: admittedly (at least in my experience), most code that you are working on does not get printed out on paper, but by limiting the number of characters you can ensure that formatting doesn't get messed up when printed.
    - Past editors (?): not sure about this one, but I suspect that at some point in the distant past of programming, (at least some) text editors may have been based on a fixed-width buffer.

    I'm sure there are points that I am still missing, so feel free to add to these. Now, when I observe C or C# code nowadays, I often see a number of different styles, the main ones being:

    - Line length capped at 80, 100, or even 120 characters. As far as I understand, 80 is the traditional length, but the longer limits of 100 and 120 have appeared because of the widespread use of high resolutions and widescreen monitors nowadays.
    - No line-length capping at all. This tends to be pretty horrible to read, and I don't see it too often, though it's certainly not too rare either.
    - Inconsistent capping of line length. The length of some lines is limited to a fixed maximum (or even a maximum that changes depending on the file/location in code), while others (possibly comments) are not at all.

    My personal preference here (at least recently) has been to cap the line length at 100 in the Visual Studio editor. This means that in a decently sized window (on a non-widescreen monitor), the ends of lines are still fully visible. I can however see a few disadvantages in this, especially when you end up writing code that's indented 3 or 4 levels and then have to include a long string literal, though I often take this as a sign to refactor my code!

    In particular, I am curious what the C and C# coders (or anyone who uses Visual Studio, for that matter) think about this point, though I would be interested in hearing anyone's thoughts on the subject.

    Edit: Thanks for all the answers. I appreciate the variety of opinions here, all presenting sound reasons. Consensus does seem to be tipping in the direction of always (or almost always) limiting the line length. Interestingly, limiting line length appears in various coding standards: judging by some of the answers, both the Python and Google C++ guidelines set the limit at 80 chars. I haven't seen anything similar regarding C# or VB.NET, but I would be curious to see if there are such guidelines anywhere.

  • Help with ListView Database

    - by Weston Dunn
    I am having issues @ run with this code: App Force Closing.. Sprinter.Java import android.app.ListActivity; import android.database.Cursor; import android.database.sqlite.SQLiteDatabase; import android.os.Bundle; import android.widget.ListAdapter; import android.widget.SimpleCursorAdapter; public class Sprinter extends ListActivity { /** Called when the activity is first created. */ final static String MY_DB_NAME = "Sprinter"; final static String MY_DB_TABLE = "Stations"; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); SQLiteDatabase myDB = null; try { myDB = this.openOrCreateDatabase(MY_DB_NAME, MODE_PRIVATE, null); myDB.execSQL("CREATE TABLE IF NOT EXISTS " + MY_DB_TABLE + "_id integer primary key autoincrement, name varchar(100);"); myDB.execSQL("INSERT INTO " + MY_DB_TABLE + " (_id, name)" + " VALUES ('', 'Oceanside Transit Center');"); myDB.execSQL("INSERT INTO " + MY_DB_TABLE + " (_id, name)" + " VALUES ('', 'Coast Highway');"); Cursor mCursor = myDB.rawQuery("SELECT name" + " FROM " + MY_DB_TABLE, null); startManagingCursor(mCursor); ListAdapter adapter = new SimpleCursorAdapter(this, R.layout.list_item, mCursor, new String[] { "name" }, new int[] { R.id.Name }); this.setListAdapter(adapter); this.getListView().setTextFilterEnabled(true); } finally { if (myDB != null) { myDB.close(); } } } } main.xml <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent" > <ListView android:id="@id/android:list" android:layout_width="wrap_content" android:layout_height="wrap_content"> </ListView> <TextView android:id="@id/android:empty" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="No Data" /> </LinearLayout> list_item.xml <?xml version="1.0" encoding="utf-8"?> <LinearLayout android:id="@+id/LinearLayout" android:layout_width="fill_parent" android:layout_height="fill_parent" xmlns:android="http://schemas.android.com/apk/res/android"> <TextView android:id="@+id/Name" android:layout_width="wrap_content" android:layout_height="wrap_content"> </TextView> </LinearLayout>
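
    Two fixes stand out (both visible in the posted source): the CREATE TABLE string is missing the opening parenthesis before the column list, and the INSERTs pass '' for the autoincrement _id column. A SimpleCursorAdapter also requires an _id column in its cursor, so the query must select it too. A hedged correction:

    ```java
    // hedged sketch: parenthesize the column list and let SQLite assign _id
    myDB.execSQL("CREATE TABLE IF NOT EXISTS " + MY_DB_TABLE
            + " (_id integer primary key autoincrement, name varchar(100));");
    myDB.execSQL("INSERT INTO " + MY_DB_TABLE + " (name) VALUES ('Oceanside Transit Center');");
    myDB.execSQL("INSERT INTO " + MY_DB_TABLE + " (name) VALUES ('Coast Highway');");

    // SimpleCursorAdapter needs the _id column present in the cursor
    Cursor mCursor = myDB.rawQuery("SELECT _id, name FROM " + MY_DB_TABLE, null);
    ```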

  • Linq-to-SQL: How to perform a count on a sub-select

    - by Peter Bridger
    I'm still trying to get my head round how to use LINQ-to-SQL correctly, rather than just writing my own sprocs. In the code belong a userId is passed into the method, then LINQ uses this to get all rows from the GroupTable tables matching the userId. The primary key of the GroupUser table is GroupUserId, which is a foreign key in the Group table. /// <summary> /// Return summary details about the groups a user belongs to /// </summary> /// <param name="userId"></param> /// <returns></returns> public List<Group> GroupsForUser(int userId) { DataAccess.KINv2DataContext db = new DataAccess.KINv2DataContext(); List<Group> groups = new List<Group>(); groups = (from g in db.Groups join gu in db.GroupUsers on g.GroupId equals gu.GroupId where g.Active == true && gu.UserId == userId select new Group { Name = g.Name, CreatedOn = g.CreatedOn }).ToList<Group>(); return groups; } } This works fine, but I'd also like to return the total number of Users who are in a group and also the total number of Contacts that fall under ownership of the group. Pseudo code ahoy! /// <summary> /// Return summary details about the groups a user belongs to /// </summary> /// <param name="userId"></param> /// <returns></returns> public List<Group> GroupsForUser(int userId) { DataAccess.KINv2DataContext db = new DataAccess.KINv2DataContext(); List<Group> groups = new List<Group>(); groups = (from g in db.Groups join gu in db.GroupUsers on g.GroupId equals gu.GroupId where g.Active == true && gu.UserId == userId select new Group { Name = g.Name, CreatedOn = g.CreatedOn, // ### This is the SQL I would write to get the data I want ### MemberCount = ( SELECT COUNT(*) FROM GroupUser AS GU WHERE GU.GroupId = g.GroupId ), ContactCount = ( SELECT COUNT(*) FROM Contact AS C WHERE C.OwnerGroupId = g.GroupId ) // ### End of extra code ### }).ToList<Group>(); return groups; } }
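
    The pseudo-SQL maps fairly directly onto LINQ: a correlated Count() inside the projection is translated by LINQ-to-SQL into a SELECT COUNT(*) sub-select. A hedged sketch (db.Contacts is the assumed pluralized table property for the Contact table):

    ```csharp
    // hedged sketch: correlated sub-query counts inside the projection
    select new Group
    {
        Name = g.Name,
        CreatedOn = g.CreatedOn,
        MemberCount = db.GroupUsers.Count(x => x.GroupId == g.GroupId),
        ContactCount = db.Contacts.Count(c => c.OwnerGroupId == g.GroupId)
    }
    ```

    If the Group entity has its associations mapped, g.GroupUsers.Count() reads more naturally and translates to the same SQL.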

  • Oracle Coding Standards Feature Implementation

    - by Mike Hofer
    Okay, I have reached a sort of an impasse. In my open-source project, a .NET-based Oracle database browser, I've implemented a bunch of refactoring tools. So far, so good. The one feature I was really hoping to implement was a big "Global Reformat" that would make the code (scripts, functions, procedures, packages, views, etc.) standards-compliant. (I've always been saddened by the lack of decent SQL refactoring tools, and wanted to do something about it.)

    Unfortunately, I am discovering, much to my chagrin, that there doesn't seem to be any one widely used or even "generally accepted" standard for PL/SQL. That kind of puts a crimp in my implementation plans. My search has been fairly exhaustive. I've found lots of conflicting documents, threads and articles, and the opinions are fairly diverse. (Comma placement, of all things, seems to generate quite a bit of debate.) So I'm faced with a couple of options:

    1. Add a feature that lets the user customize the standard and then reformat the code according to that standard, or
    2. Add a feature that lets the user customize the standard and simply generate a violations list like StyleCop does, leaving the SQL untouched.

    In my mind, the first option saves the end users a lot of work, but runs the risk of modifying SQL in potentially unwanted ways. The second option runs the risk of generating lots of warnings and doing no work whatsoever. (It'd just be generally annoying.) In either scenario, I still have no standard to go by. What I'd need to know from you guys is kind of poll-ish, but kind of not. If you were going to use a tool of this nature, what parts of your SQL code would you want it to warn you about or fix?

    Again, I'm just at a loss due to the lack of a cohesive standard. And given that there isn't anything out there that's officially published by Oracle, I think this is something the community could weigh in on. Also, given the way that voting works on SO, the votes would help to establish the popularity of a given "refactoring."

    P.S. The engine parses SQL into an expression tree so it can robustly analyze the SQL and reformat it. There should be quite a bit that we can do to correct the format of the SQL. But I am thinking that for the first release, layout is the primary concern. Though it is worth noting that the tool already has refactorings for converting keywords to upper case and identifiers to lower case.

  • Search for multiple values in an xml column

    - by Yuriy Gettya
    Environment: SQL Server 2012. Primary and secondary (value) index is built on xml column. Say I have a table Message with xml column WordIndex. I also have a table Word which has WordId and WordText. Xml for Message.WordIndex has the following schema: <xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="http://www.example.com"> <xs:element name="wi"> <xs:complexType> <xs:sequence> <xs:element maxOccurs="unbounded" name="w"> <xs:complexType> <xs:sequence> <xs:element maxOccurs="unbounded" name="p" type="xs:unsignedByte" /> </xs:sequence> <xs:attribute name="wid" type="xs:unsignedByte" use="required" /> </xs:complexType> </xs:element> </xs:sequence> </xs:complexType> </xs:element> </xs:schema> and some data to go with it: <wi xmlns="http://www.example.com"> <w wid="1"> <p>28</p> <p>72</p> <p>125</p> </w> <w wid="4"> <p>89</p> </w> <w wid="5"> <p>11</p> </w> </wi> I need to search for multiple values in my xml column WordIndex either using OR or AND. What I'm doing is fairly rudimentary, since I'm a n00b in XQuery (taken from debug output, hence real values): with xmlnamespaces(default 'http://www.example.com') select m.Subject, m.MessageId, m.WordIndex.query(' let $dummy := 0 return <word_list> { for $w in /wi/w where $w/@wid=64 return <word wid="64" pos="{data($w/p)}"/> } { for $w in /wi/w where $w/@wid=70 return <word wid="70" pos="{data($w/p)}"/> } { for $w in /wi/w where $w/@wid=63 return <word wid="63" pos="{data($w/p)}"/> } </word_list> ') as WordPosition from Message as m -- more joins go here ... where -- more conditions go here ... and m.WordIndex.exist('/wi/w[@wid=64]') = 1 and m.WordIndex.exist('/wi/w[@wid=70]') = 1 and m.WordIndex.exist('/wi/w[@wid=63]') = 1 How can this be optimized?
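
    On the optimization question: the three m.WordIndex.exist() probes each walk the XML index separately, and with only primary plus VALUE secondary indexes in place, path-plus-value predicates of the form /wi/w[@wid=64] are usually served better by a PATH secondary index (or, on SQL Server 2012 SP1 and later, a selective XML index scoped to /wi/w/@wid). A hedged sketch, with illustrative index names and assuming the primary XML index already exists:

    ```sql
    -- hedged sketch: a PATH secondary index for path+value exist() probes
    CREATE XML INDEX sxi_Message_WordIndex_path
        ON Message (WordIndex)
        USING XML INDEX pxi_Message_WordIndex  -- the existing primary XML index
        FOR PATH;
    ```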

  • How to optimize my PostgreSQL DB for prefix search?

    - by asmaier
    I have a table called "nodes" with roughly 1.7 million rows in my PostgreSQL db =#\d nodes Table "public.nodes" Column | Type | Modifiers --------+------------------------+----------- id | integer | not null title | character varying(256) | score | double precision | Indexes: "nodes_pkey" PRIMARY KEY, btree (id) I want to use information from that table for autocompletion of a search field, showing the user a list of the ten titles having the highest score fitting to his input. So I used this query (here searching for all titles starting with "s") =# explain analyze select title,score from nodes where title ilike 's%' order by score desc; QUERY PLAN ----------------------------------------------------------------------------------------------------------------------- Sort (cost=64177.92..64581.38 rows=161385 width=25) (actual time=4930.334..5047.321 rows=161264 loops=1) Sort Key: score Sort Method: external merge Disk: 5712kB -> Seq Scan on nodes (cost=0.00..46630.50 rows=161385 width=25) (actual time=0.611..4464.413 rows=161264 loops=1) Filter: ((title)::text ~~* 's%'::text) Total runtime: 5260.791 ms (6 rows) This was much to slow for using it with autocomplete. With some information from Using PostgreSQL in Web 2.0 Applications I was able to improve that with a special index =# create index title_idx on nodes using btree(lower(title) text_pattern_ops); =# explain analyze select title,score from nodes where lower(title) like lower('s%') order by score desc limit 10; QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------ Limit (cost=18122.41..18122.43 rows=10 width=25) (actual time=1324.703..1324.708 rows=10 loops=1) -> Sort (cost=18122.41..18144.60 rows=8876 width=25) (actual time=1324.700..1324.702 rows=10 loops=1) Sort Key: score Sort Method: top-N heapsort Memory: 17kB -> Bitmap Heap Scan on nodes (cost=243.53..17930.60 rows=8876 width=25) (actual time=96.124..1227.203 rows=161264 loops=1) Filter: (lower((title)::text) ~~ 's%'::text) -> Bitmap Index Scan on title_idx (cost=0.00..241.31 rows=8876 width=0) (actual time=90.059..90.059 rows=161264 loops=1) Index Cond: ((lower((title)::text) ~>=~ 's'::text) AND (lower((title)::text) ~<~ 't'::text)) Total runtime: 1325.085 ms (9 rows) So this gave me a speedup of factor 4. But can this be further improved? What if I want to use '%s%' instead of 's%'? Do I have any chance of getting a decent performance with PostgreSQL in that case, too? Or should I better try a different solution (Lucene?, Sphinx?) for implementing my autocomplete feature?
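
    For the '%s%' (infix) case, the btree/text_pattern_ops index cannot help, since there is no prefix to anchor on; the usual PostgreSQL answer is a trigram index from the pg_trgm contrib module, which serves LIKE '%...%' predicates directly. A hedged sketch (CREATE EXTENSION assumes PostgreSQL 9.1+, and GIN support for LIKE arrived with pg_trgm in that release; older versions load contrib/pg_trgm via its SQL script):

    ```sql
    -- hedged sketch: trigram GIN index for substring matching
    CREATE EXTENSION pg_trgm;
    CREATE INDEX title_trgm_idx ON nodes USING gin (lower(title) gin_trgm_ops);

    -- now both prefix and infix LIKE patterns can use an index
    SELECT title, score FROM nodes
    WHERE lower(title) LIKE '%s%'
    ORDER BY score DESC LIMIT 10;
    ```

    For pure prefix autocomplete, keeping the LIMIT 10 on the existing text_pattern_ops query already lets the planner stop early, which is where most of the factor-4 speedup came from.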

    Read the article

  • Entering and retrieving data from SQLite for an android List View

    - by Infiniti Fizz
    Hi all, I started learning Android development a few weeks ago and have gone through the developer.android.com tutorials etc., but now I have a problem. I'm trying to create an app that tracks the usage of each installed app, so I'm pulling the names of all installed apps using the PackageManager and then trying to put them into an SQLite database table. I am using the Notepad tutorial's SQLite implementation, but I'm running into some problems that I have tried for days to solve. I have two classes: the DBHelper class and the actual ListActivity class. For some reason the app force closes when I run my fillDatabase() function, which gets all the app names from the PackageManager and tries to put them into the database:

    private void fillDatabase() {
        PackageManager manager = this.getPackageManager();
        List<ApplicationInfo> appList = manager.getInstalledApplications(0);
        for (int i = 0; i < appList.size(); i++) {
            mDbHelper.addApp(manager.getApplicationLabel(appList.get(i)).toString(), 0);
        }
    }

    addApp() is defined in my AppsDbHelper class and looks as follows:

    public long addApp(String name, int usage) {
        ContentValues initialValues = new ContentValues();
        initialValues.put(KEY_NAME, name);
        initialValues.put(KEY_USAGE, usage);
        return mDb.insert(DATABASE_TABLE, null, initialValues);
    }

    The database creation statement is defined as follows:

    private static final String DATABASE_CREATE =
        "create table notes (_id integer primary key autoincrement, "
        + "title text not null, usage integer not null);";

    I have commented out all statements that follow fillDatabase() in the onCreate() method of the ListActivity, so I know it is definitely the problem, but I don't know why. I am putting the app name into the KEY_NAME field of the row and 0 into the KEY_USAGE field (because my app initially defaults the usage of each app to 0, i.e. not used yet). If addApp() skips the usage and only puts KEY_NAME into the ContentValues and into the database, it seems to work fine, but I want a column for usage. Any ideas why it is not working? Have I overlooked something? Thanks for your time, InfinitiFizz
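    For debugging, it can help to look at the plain SQL the insert() call boils down to — a minimal sketch, assuming KEY_NAME and KEY_USAGE map to the title and usage columns from DATABASE_CREATE, with an illustrative app name:

    -- What DATABASE_CREATE builds:
    CREATE TABLE notes (
        _id   integer PRIMARY KEY AUTOINCREMENT,
        title text    NOT NULL,
        usage integer NOT NULL
    );

    -- What each mDb.insert() call amounts to:
    INSERT INTO notes (title, usage) VALUES ('Browser', 0);

    One possibility worth ruling out: if an earlier run of the app created the notes table before the usage column existed, the insert fails against the stale schema until the database is deleted or the database version is bumped so onUpgrade() recreates the table — which would fit the symptom that inserts without KEY_USAGE succeed.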

    Read the article

  • Any simple approaches for managing customer data change requests for global reference files?

    - by Kelly Duke
    For the first time, I am developing in an environment where there is a central repository of industry-standard reference data tables and many different customers who need to select records from those tables to fill in foreign key information for their customer-specific records. Because these reference files are utilized by all customers, I want to reserve Create/Update/Delete access to them for global product administrators.

    However, I would like to implement a (semi-)automated interface by which specific customers could request record additions, deletions or modifications to any of the industry-standard reference files shared among all customers. I know I need something like a "data change request" table specifying: user id, request datetime, request type (insert, modify, delete), a user-entered text explanation of the change request, the request's current status (pending, declined, completed), admin resolution datetime, admin id, an admin-entered text description of the resolution, etc. What I can't figure out is how to elegantly handle the fact that these change requests could apply to dozens of different tables with differing column definitions.

    I would like to give the customer users making these requests a convenient way to enter their proposed additions/modifications directly into CRUD screens that look very much like the reference-table CRUD screens they don't have write/delete permissions for (with an additional text explanation and perhaps a request priority field). I would also like to give the global admins a tool that lists all outstanding change requests for the users they oversee, sorted by date requested or by user and date requested. Upon selecting a request off the list, the admin would be directed to another CRUD screen populated with the fields the customer requested for the new/modified reference record, along with the customer's text explanation, the request status and the resolution explanation field. At this point the admin could accept, edit or reject the requested change; if accepted, the affected reference file would be updated automatically, and the request record's status, resolution explanation and resolution datetime would be updated as well.

    However, I want to keep the actual production reference tables as simple as possible and free of these extraneous, typically null change-request fields. I'd also like the change request file to aggregate all requests across all the reference tables, yet somehow "point to" the specific reference table and primary key in question for modification and deletion requests, or the specific reference table and the customer-entered field values for record creation requests. Does anybody have ideas on how to design something like this effectively? Is there a cleaner, simpler way I am missing? Thank you so much for reading.
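    One way to keep the reference tables clean while still aggregating requests across all of them is a single request table that records the target table's name, the target row's key (empty for creation requests), and the proposed values in a serialized form. The sketch below is only an illustration of that shape — every name in it is an assumption, and the vendor-specific pieces (identity column, text types) would need adapting:

    CREATE TABLE data_change_request (
        request_id      integer      PRIMARY KEY,       -- identity/serial in practice
        requested_by    integer      NOT NULL,          -- requesting user id
        requested_at    timestamp    NOT NULL,
        request_type    varchar(10)  NOT NULL,          -- 'insert' | 'modify' | 'delete'
        target_table    varchar(128) NOT NULL,          -- which reference table
        target_key      varchar(64)  NULL,              -- PK of affected row; NULL for inserts
        proposed_values text         NULL,              -- serialized column/value pairs
        priority        integer      NULL,
        explanation     text         NOT NULL,
        status          varchar(10)  NOT NULL DEFAULT 'pending',
        resolved_by     integer      NULL,              -- admin id
        resolved_at     timestamp    NULL,
        resolution_note text         NULL
    );

    The CRUD screens the customer users already know could then be reused in a "proposal" mode that serializes the entered row into proposed_values instead of writing to the live table, while the admin tool replays accepted requests against the table named in target_table.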

    Read the article

  • How does MySQL's ORDER BY RAND() work?

    - by Eugene
    Hi, I've been doing some research and testing on how to do fast random row selection in MySQL. In the process I've run into some unexpected results, and now I am not fully sure I know how ORDER BY RAND() really works. I always thought that when you do ORDER BY RAND() on a table, MySQL adds a new column filled with random values, sorts the data by that column, and then you take, e.g., the top value, which got there randomly. I've done lots of googling and testing and finally found that the query Jay offers in his blog is indeed the fastest solution:

    SELECT * FROM Table T
    JOIN (SELECT CEIL(MAX(ID)*RAND()) AS ID FROM Table) AS x
      ON T.ID >= x.ID
    LIMIT 1;

    While a plain ORDER BY RAND() takes 30-40 seconds on my test table, his query does the work in 0.1 seconds. He explains how this works in the blog, so I'll skip that and move on to the odd thing. My table is a common table with a PRIMARY KEY id and other non-indexed columns like username, age, etc. Here's the thing I am struggling to explain:

    SELECT * FROM table ORDER BY RAND() LIMIT 1;             /* 30-40 seconds */
    SELECT id FROM table ORDER BY RAND() LIMIT 1;            /* 0.25 seconds */
    SELECT id, username FROM table ORDER BY RAND() LIMIT 1;  /* 90 seconds */

    I was expecting to see approximately the same time for all three queries, since I am always sorting on a single column, but for some reason this didn't happen. Please let me know if you have any ideas about this. I have a project where I need to do fast ORDER BY RAND(), and personally I would prefer to use

    SELECT id FROM table ORDER BY RAND() LIMIT 1;
    SELECT * FROM table WHERE id=ID_FROM_PREVIOUS_QUERY LIMIT 1;

    which, yes, is slower than Jay's method, but is smaller and easier to understand. My queries are rather big ones with several JOINs and a WHERE clause, and while Jay's method still works, the query grows really big and complex because I need to repeat all the JOINs and the WHERE clause in the joined subquery (called x in his query). Thanks for your time!
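    A hedged way to probe the timing difference (exact behavior varies by MySQL version and storage engine) is to compare the EXPLAIN output of the fast and slow variants: when only the primary key is selected, the sort works over narrow (rand, id) pairs that can be read straight from the index, whereas adding a non-indexed column forces full row reads and a much wider filesort that may spill to disk. The asker's table name is a placeholder, so backticks are added here because TABLE is a reserved word:

    EXPLAIN SELECT id FROM `table` ORDER BY RAND() LIMIT 1;
    EXPLAIN SELECT id, username FROM `table` ORDER BY RAND() LIMIT 1;

    -- Tell-tale difference to look for in the Extra column (illustrative, not
    -- captured from a real run): the id-only query can report "Using index"
    -- alongside "Using temporary; Using filesort", while the wider query
    -- loses "Using index" and must touch the table rows.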

    Read the article
