Search Results

Search found 41135 results on 1646 pages for 'non relational database'.

Page 658/1646 | < Previous Page | 654 655 656 657 658 659 660 661 662 663 664 665  | Next Page >

  • Error // Usage: rails new APP_PATH [options] // when running 'rails server'

    - by madphill
    Background info: I'm using Git to get a repository of a project with Ruby files in it. The project lives in my SITES folder under the home directory on my Mac. I have Ruby 1.8.7 and have just upgraded Rails to 3.0.3. All I am trying to accomplish is to be able to render localhost.com:3000 in my browser for the Git project I've already downloaded so I can work on it locally. I ran the command 'rails server' and was returned the message below: Usage: rails new APP_PATH [options] Options: [--skip-gemfile] # Don't create a Gemfile -m, [--template=TEMPLATE] # Path to an application template (can be a filesystem path or URL) -d, [--database=DATABASE] # Preconfigure for selected database (options: mysql/oracle/postgresql/sqlite3/frontbase/ibm_db) # Default: sqlite3 -O, [--skip-active-record] # Skip Active Record files -J, [--skip-prototype] # Skip Prototype files -T, [--skip-test-unit] # Skip Test::Unit files [--dev] # Setup the application with Gemfile pointing to your Rails checkout -r, [--ruby=PATH] # Path to the Ruby binary of your choice # Default: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby -G, [--skip-git] # Skip Git ignores and keeps -b, [--builder=BUILDER] # Path to an application builder (can be a filesystem path or URL) [--edge] # Setup the application with Gemfile pointing to Rails repository Runtime options: -f, [--force] # Overwrite files that already exist -s, [--skip] # Skip files that already exist -p, [--pretend] # Run but do not make any changes -q, [--quiet] # Suppress status output Rails options: -h, [--help] # Show this help message and quit -v, [--version] # Show Rails version number and quit Description: The 'rails new' command creates a new Rails application with a default directory structure and configuration at the path you specify. Example: rails new ~/Code/Ruby/weblog This generates a skeletal Rails installation in ~/Code/Ruby/weblog. See the README in the newly created application to get going.

    Read the article

  • Can I output/flush data to screen while processing ajax page?

    - by Bee
    I need to display on my page a list of records pulled from a table. Ajax works fine (I query the database and put all the data inside an element on the main page) but if I have lots of records (say 500+) it will hang until data is fully loaded, THEN it will be sent back to the page and correctly displayed. I would like to be able to display the records on the page while getting them, instead of being forced to wait until completion. I am trying with flush(); inside the remote (ajax) page but it still waits until full data is loaded. This is what I currently have inside the ajax page: At the very beginning: @apache_setenv('no-gzip', 1); @ini_set('zlib.output_compression', 0); @ini_set('implicit_flush', 1); for ($i = 0; $i < ob_get_level(); $i++) { ob_end_flush(); } ob_implicit_flush(1); Then whenever I have an echo call: ob_flush(); Now if I load the ajax page alone... it will list the records while reading them from the database. But if I call the same page via Ajax, it will hang and send all the data at once. Any idea? This is the function I use to get the ajax content ('id' is the target element, 'url' refers to the ajax page that runs the database query to list the records): function ajax(id,url) { xmlhttp=new XMLHttpRequest(); xmlhttp.open("GET",url,false); xmlhttp.send(null); document.getElementById(id).innerHTML = parseScript(xmlhttp.responseText); }

    Read the article

  • Free tools/libraries to compare tables with filtering for SQL Server 2008 and visualize/sync diffs t

    - by MicMit
    I am building a GUI in C# for a content manager and am looking for tools, code snippets or open libraries (ideally in C#) which allow me the following: 1. For table A in database X (test) and table A in database Y (production), and for a simple filter (e.g. listname = "XYZ"), I need to show additions/deletions/updates in some way, which might be side-by-side or just an html report (e.g. "2 records added" with an html table of some fields, "2 records deleted" with an html table of some fields). Considering that this task is very common, I guess certain components should exist? Components could either return collections from the given parameters for further visualizing, or just produce the reports mentioned above. 2. I need to push the changes for the filter I mentioned in 1 and update the table in the production database for this filter only (i.e. for the particular list approved by the content person). Again, there are probably SQL code generators - components in addition to diffs, or standalone. 3. The key thing: the tools/libraries should be suitable for integration with the existing C# application.
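
    One way to cover point 1 without a third-party library is to load the filtered rows from both databases into DataTables and diff them by key. The following is only a rough sketch, not a recommendation of a specific product: the key column name "Id" and the connection strings are assumptions, and the HTML rendering and the sync step of point 2 are left to the surrounding application.

      using System;
      using System.Data;
      using System.Data.SqlClient;
      using System.Linq; // AsEnumerable()/Field<T>() also need a reference to System.Data.DataSetExtensions

      class TableDiff
      {
          // Load the filtered slice of table A from one database.
          static DataTable Load(string connectionString, string listName)
          {
              using (var conn = new SqlConnection(connectionString))
              using (var adapter = new SqlDataAdapter("select * from A where listname = @list", conn))
              {
                  adapter.SelectCommand.Parameters.AddWithValue("@list", listName);
                  var table = new DataTable();
                  adapter.Fill(table);
                  return table;
              }
          }

          static void Main()
          {
              var test = Load("Server=...;Database=X;Integrated Security=true", "XYZ");
              var prod = Load("Server=...;Database=Y;Integrated Security=true", "XYZ");

              var testRows = test.AsEnumerable().ToDictionary(r => r.Field<int>("Id"));
              var prodRows = prod.AsEnumerable().ToDictionary(r => r.Field<int>("Id"));

              var added   = testRows.Keys.Except(prodRows.Keys).ToList();
              var deleted = prodRows.Keys.Except(testRows.Keys).ToList();
              var updated = testRows.Keys.Intersect(prodRows.Keys)
                  .Where(k => !testRows[k].ItemArray.SequenceEqual(prodRows[k].ItemArray))
                  .ToList();

              Console.WriteLine("{0} added, {1} deleted, {2} updated", added.Count, deleted.Count, updated.Count);
          }
      }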

    Read the article

  • Script to copy files on CD and not on hard disk to a new directory

    - by John22
    I need to copy files from a set of CDs that have a lot of duplicate content, with each other, and with what's already on my hard disk. The file names of identical files are not the same, and are in sub-directories of different names. I want to copy non-duplicate files from the CD into a new directory on the hard disk. I don't care about the sub-directories - I will sort it out later - I just want the unique files. I can't find software to do that - see my post at SuperUser http://superuser.com/questions/129944/software-to-copy-non-duplicate-files-from-cd-dvd Someone at SuperUser suggested I write a script using GNU's "find" and the Win32 version of some checksum tools. I glanced at that, and have not done anything like that before. I'm hoping something exists that I can modify. I found a good program to delete duplicates, Duplicate Cleaner (it compares checksums), but it won't help me here, as I'd have to copy all the CDs to disk, and each is probably about 80% duplicates, and I don't have room to do that - I'd have to cycle through a few at a time copying everything, then turning around and deleting 80% of it, working the hard drive a lot. Thanks for any help.
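
    Since the request is essentially for a one-off script, here is a rough C# sketch of the checksum approach under stated assumptions: the paths are placeholders, "duplicate" means identical file content (hashed with SHA-256), and everything unique is copied into one flat folder, which matches the "I don't care about the sub-directories" requirement.

      using System;
      using System.Collections.Generic;
      using System.IO;
      using System.Security.Cryptography;

      class UniqueCopier
      {
          static string HashFile(string path)
          {
              using (var sha = SHA256.Create())
              using (var stream = File.OpenRead(path))
                  return BitConverter.ToString(sha.ComputeHash(stream));
          }

          static void Main()
          {
              string existingRoot = @"C:\MyFiles";      // what is already on the hard disk
              string cdRoot       = @"D:\";             // the CD/DVD drive
              string targetDir    = @"C:\UniqueFromCD"; // where non-duplicates get copied
              Directory.CreateDirectory(targetDir);

              // Index everything already owned (including files copied from earlier CDs) by content hash.
              var seen = new HashSet<string>();
              foreach (var root in new[] { existingRoot, targetDir })
                  foreach (var file in Directory.GetFiles(root, "*", SearchOption.AllDirectories))
                      seen.Add(HashFile(file));

              foreach (var file in Directory.GetFiles(cdRoot, "*", SearchOption.AllDirectories))
              {
                  string hash = HashFile(file);
                  if (!seen.Add(hash))
                      continue; // identical content already exists somewhere

                  string dest = Path.Combine(targetDir, Path.GetFileName(file));
                  if (File.Exists(dest)) // different content but clashing name
                      dest = Path.Combine(targetDir, hash.Substring(0, 8) + "_" + Path.GetFileName(file));
                  File.Copy(file, dest);
              }
          }
      }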

    Read the article

  • PHP syntax for referencing self with late static binding

    - by Chris
    When I learned PHP it was pretty much in procedural form; more recently I've been trying to adapt to the OOP way of doing things. However, the tutorials I've been following were produced before PHP 5.3, when late static binding was introduced. What I want to know is how to reference self when calling a function from a parent class. For example, these two methods were written for a User class which is a child of DatabaseObject. Right now they're sitting inside the User class, but, as they're used in other child classes of DatabaseObject, I'd like to promote them to be included inside DatabaseObject. public static function find_all() { global $database; $result_set = self::find_by_sql("select * from ".self::$table_name); return $result_set; } and: protected function cleaned_attributes() { global $database; $clean_attributes = array(); foreach($this->attributes() as $key => $value) { $clean_attributes[$key] = $database->escape_value($value); } return $clean_attributes; } So I have three questions: 1) How do I change the self:: reference when I move it to the parent? Is it static:: or something similar? 2) When calling the function from my code do I call it in the same way, as a function of the child class, e.g. User::find_all(), or is there a change there also? 3) Is there anything else I need to know before I start chopping bits up?

    Read the article

  • django powering multiple shops from one code base on a single domain

    - by imanc
    Hey, I am new to django and python and am trying to figure out how to modify an existing app to run multiple shops through a single domain. Django's sites middleware seems inappropriate in this particular case because it manages different domains, not sites run through the same domain, e.g.: domain.com/uk domain.com/us domain.com/es etc. Each site will need translated content - and minor template changes. The solution needs to be flexible enough to allow for easy modification of templates. The forms will also need to vary a bit, e.g. minor variances in fields and validation for each country-specific shop. I am thinking along the lines of the following as a solution and would love some feedback from experienced django-ers: In short: same codebase, but separate country-specific urls files, separate templates and a separate database. Create a middleware class that does IP localisation, determines the country based on the URL and creates a database connection, e.g. /au/ will point to the au-specific database and so on. In root urls.py, have routes that point to a separate country-specific routing file, e.g. (r'^au/',include('urls_au')), (r'^es/',include('urls_es')). Use a single template directory, but in that directory have a localised directory structure, e.g. /base.html and /uk/base.html, and write a custom template loader that looks for local templates first (or have a separate directory for each shop and set the template directory path in middleware). Use the django internationalisation to manage translation strings throughout. Slight variances in forms and models (e.g. ZA has an ID field, France has 'door code' and 'floor' etc.): I am unsure how to handle these variations, but I suspect the tables will contain all fields but allowing nulls and the model will have all fields but allowing nulls. The forms will need to be modified slightly for each shop. Anyway, I am keen to get feedback on the best way to go about achieving this multi-site solution. It seems like it would work, but feels a bit "hackish" and I wonder if there's a more elegant way of getting this solution to work. Thanks, imanc

    Read the article

  • Can I use the auto-generated Linq-to-SQL entity classes in 'disconnected' mode?

    - by Gary McGill
    Suppose I have an automatically-generated Employee class based on the Employees table in my database. Now suppose that I want to pass employee data to a ShowAges method that will print out name & age for a list of employees. I'll retrieve the data for a given set of employees via a linq query, which will return me a set of Employee instances. I can then pass the Employee instances to the ShowAges method, which can access the Name & Age fields to get the data it needs. However, because my Employees table has relationships with various other tables in my database, my Employee class also has a Department field, a Manager field, etc. that provide access to related records in those other tables. If the ShowAges method were to invoke any of those methods, this would cause lots more data to be fetched from the database, on-demand. I want to be sure that the ShowAges method only uses the data I have already fetched for it, but I really don't want to have to go to the trouble of defining a new class which replicates the Employee class but has fewer methods. (In my real-world scenario, the class would have to be considerably more complex than the Employee class described here; it would have several 'joined' classes that do need to be populated, and others that don't). Is there a way to 'switch off' or 'disconnect' the Employees instances so that an attempt to access any property or related object that's not already populated will raise an exception? If not, then I assume that since this must be a common requirement, there might be an already-established pattern for doing this sort of thing?
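
    There is no true "disconnected" switch in LINQ to SQL that raises an exception on access, but two things come close. Both are sketched below against the Employee example from the question; MyDataContext, DepartmentId, Name and Age are assumed names standing in for the generated classes. Turning off deferred loading leaves unloaded associations empty rather than throwing, and projecting into a small shape avoids the problem entirely.

      using System;
      using System.Linq;

      static class AgeReport
      {
          public static void ShowAgesWithoutLazyLoads()
          {
              // Option 1: stop LINQ to SQL from fetching on demand. Department, Manager
              // and the other associations simply stay unpopulated (no exception, though).
              using (var db = new MyDataContext())
              {
                  db.DeferredLoadingEnabled = false;
                  var employees = db.Employees.Where(e => e.DepartmentId == 42).ToList();
                  foreach (var e in employees)
                      Console.WriteLine("{0}: {1}", e.Name, e.Age);
              }

              // Option 2: project into exactly the shape the report needs, so there is
              // nothing left for ShowAges to lazy-load in the first place.
              using (var db = new MyDataContext())
              {
                  var rows = db.Employees
                               .Where(e => e.DepartmentId == 42)
                               .Select(e => new { e.Name, e.Age })
                               .ToList();
                  foreach (var r in rows)
                      Console.WriteLine("{0}: {1}", r.Name, r.Age);
              }
          }
      }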

    Read the article

  • SQL Server Index cost

    - by yellowstar
    I have read that one of the tradeoffs for adding table indexes in SQL Server is the increased cost of insert/update/delete queries to benefit the performance of select queries. I can conceptually understand what happens in the case of an insert because SQL Server has to write entries into each index matching the new rows, but update and delete are a little more murky to me because I can't quite wrap my head around what the database engine has to do. Let's take DELETE as an example and assume I have the following schema (pardon the pseudo-SQL) TABLE Foo col1 int ,col2 int ,col3 int ,col4 int PRIMARY KEY (col1,col2) INDEX IX_1 col3 INCLUDE col4 Now, if I issue the statement DELETE FROM Foo WHERE col1=12 AND col2 > 34 I understand what the engine must do to update the table (or clustered index if you prefer). The index is set up to make it easy to find the range of rows to be removed and do so. However, at this point it also needs to update IX_1 and the query that I gave it gives no obvious efficient way for the database engine to find the rows to update. Is it forced to do a full index scan at this point? Does the engine read the rows from the clustered index first and generate a smarter internal delete against the index? It might help me to wrap my head around this if I understood better what is going on under the hood, but I guess my real question is this. I have a database that is spending a significant amount of time in delete and I'm trying to figure out what I can do about it. When I display the execution plan for the deletion, it just shows an entry for "Clustered Index Delete" on table Foo which lists in the details section the other indices that need to be updated but I don't get any indication of the relative cost of these other indices. Are they all equal in this case? Is there some way that I can estimate the impact of removing one or more of these indices without having to actually try it?

    Read the article

  • How to convert non-Latin-based encoded text into UTF-8, or make them coexist on the same page?

    - by Yallaa
    Good day, I have a script that scrapes the title/description of remote pages and prints those values into a corresponding charset=UTF-8 encoded page. Here is the problem: whenever a remote page uses a non-Latin character encoding (Arabic, Russian, Chinese, Japanese etc.), the imported values print as garbled text. I've tried passing those values through either iconv or mb_convert_encoding converters but without much success. Then I tried detecting the remote encoding first and changing my presentation page's encoding into the remote one instead of the current utf-8, which works okay with the imported values, but the other existing utf-8 content of that language on the page gets garbled instead. Example: If I try to import those values from a Russian windows-1251 page into my UTF-8 encoded page which has a mix of English/Russian content, I change the imported non-utf-8 string into utf-8 using either iconv or mb_convert_encoding. I tried: $RemoteValue = iconv($RemoteEncoding, 'UTF-8', $RemoteValue); or $RemoteValue = mb_convert_encoding($RemoteValue, "UTF-8", $RemoteEncoding); or $RemoteValue = mb_convert_encoding($RemoteValue, "UTF-8", "auto"); without success. If I detect that the remote page is windows-1251 encoded and I change my presentation page (which already has UTF-8 encoded mixed-language content) to be similar to the remote page, then the Japanese utf-8 content on the existing page gets garbled... Can 2 differently encoded strings coexist on the same page (e.g. utf-8 & windows-1251)? Am I using the converters correctly? Any hints as to why they don't work? Is there any better way to do this? Thank you for your help

    Read the article

  • how can I speed up insertion of many rows to a table via ADO.NET?

    - by jcollum
    I have a table that has 5 columns: AcctId (int), Address1 (varchar), Address2 (varchar), Person1 (varchar), Person2 (varchar). I'm generating random data to insert into this table via a C# console application. I've tried doing this random data insert via SQL-Server and decided it was not a good solution -- SQL is not good at random on an each-row basis. Generating the random data -- 975k rows of it -- takes a minimal amount of time. It's in a List of custom objects. I need to take this random data and update many rows in the database with the new random data. I tried updating the rows one at a time; it was very slow because of the repeated searching of the List object in code. So I think the best approach is to put all the randomized data into a table in the database, then update all the other tables that use this data. I.e. UPDATE t SET t.Address1=d.Address1 FROM Table1 t INNER JOIN RandomizedData d ON d.AcctId = t.Acct_ID. The database is very un-normalized so this Acct data is sprinkled all over the place. I've got no control over the normalization. So, having decided to insert all of the randomized data into a single table, I set out to create insert scripts: USE TheDatabase Insert tmp_RandomizedData SELECT 1,'4392 EIGHTH AVE','','JENNIFER CARTER','BARBARA CARTER' UNION ALL SELECT 2,'2168 MAIN ST','HNGR F','DANIEL HERNANDEZ','SUSAN MARTIN' // etc another 98 times... // FYI, this is not real data! I'm building this INSERT script in batches of 100. It's taking on average 175 ms to run each insert. Does this seem like a long time? It's going to take about 35 mins to run the whole insert. The table doesn't have a primary key or any indexes. I was planning on adding those after all the data is inserted (thinking that that would be faster). Is there a better way to do this?
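
    For this volume, SqlBulkCopy usually beats generated INSERT scripts by a wide margin, and keeping the target table free of indexes during the load (as already planned) still applies. A sketch follows; the connection string, column list and the RandomRow type are assumptions standing in for the console app's own objects.

      using System.Collections.Generic;
      using System.Data;
      using System.Data.SqlClient;

      static class BulkLoader
      {
          // Stand-in for whatever custom object the List currently holds.
          public class RandomRow
          {
              public int AcctId;
              public string Address1, Address2, Person1, Person2;
          }

          public static void Load(IEnumerable<RandomRow> rows, string connectionString)
          {
              // Stage the in-memory data in a DataTable shaped like tmp_RandomizedData.
              var table = new DataTable();
              table.Columns.Add("AcctId", typeof(int));
              table.Columns.Add("Address1", typeof(string));
              table.Columns.Add("Address2", typeof(string));
              table.Columns.Add("Person1", typeof(string));
              table.Columns.Add("Person2", typeof(string));
              foreach (var r in rows)
                  table.Rows.Add(r.AcctId, r.Address1, r.Address2, r.Person1, r.Person2);

              using (var bulk = new SqlBulkCopy(connectionString))
              {
                  bulk.DestinationTableName = "tmp_RandomizedData";
                  bulk.BatchSize = 10000;     // commit in chunks instead of one huge transaction
                  bulk.BulkCopyTimeout = 0;   // no timeout for the full load
                  bulk.WriteToServer(table);  // streams all 975k rows in a single call
              }
          }
      }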

    Read the article

  • Converting FoxPro Date type to SQL Server 2005 DateTime using SSIS

    - by Avrom
    Hi, When using SSIS in SQL Server 2005 to convert a FoxPro database to a SQL Server database, if the given FoxPro database has a date type, SSIS assumes it is an integer type. The only way to convert it to a dateTime type is to manually select this type. However, that is not practical to do for over 100 tables. Thus, I have been using a workaround in which I use DTS on SQL Server 2000 which converts it to a smallDateTime, then make a backup, then a restore into SQL Server 2005. This workaround is starting to be a little annoying. So, my question is: Is there anyway to setup SSIS so that whenever it encounters a date type to automatically assume it should be converted to a dateTime in SQL Server and apply that rule across the board? Update To be specific, if I use the import/export wizard in SSIS, I get the following error: Column information for the source and the destination data could not be retrieved, or the data types of source columns were not mapped correctly to those available on the destination provider. Followed by a list of a given table's date columns. If I manually set each one to a dateTime, it imports fine. But I do not wish to do this for a hundred tables.

    Read the article

  • Zend Sessions problem with IE8

    - by Emil
    I'm running a Zend Framework powered website and it seems to have serious problems with sessions. I have a 5-step process where I save the form data in the session between the steps and then save it into the database on the last step. When we built the site, sometimes the session just went away and forced us to restart. Now it seems to work again, but recently we discovered an issue with Internet Explorer 8. It fails between steps 2 and 3 and forgets the session. It works fine in IE6, IE7, FF, Chrome, Safari and even in my mobile web browser (SE P1). We're storing our sessions in the database, and if I deactivate the session db handler it works. What's the difference between using the database and not using it for sessions? Do I lose something if I switch back? Bootstrap: /* Start session */ $saveHandler = new Zend_Session_SaveHandler_DbTable(array( 'name' => 'sessions', 'primary' => 'id', 'modifiedColumn' => 'modified', 'dataColumn' => 'data', 'lifetimeColumn' => 'lifetime' )); Zend_Session::rememberMe((int) $config->session->lifetime); $saveHandler->setLifetime((int) $config->session->lifetime) ->setOverrideLifetime(true); Zend_Session::setSaveHandler($saveHandler); Zend_Session::start(); And in my step controller: $session = new Zend_Session_Namespace('wizard'); Then I'm just working with $session, saving data in a stdClass in $session.

    Read the article

  • nhibernate - mapping with constraints

    - by Tobias Müller
    Hello everybody, I am having a problem with my nhibernate-mapping and I can't find a solution by searching on stackoverflow/google/documentation. The database I am using has (amongst others) two tables. One is unit with the following fields: id enduring_id starts ends damage_enduring_id [...] The other one is damage, which has the following fields: id enduring_id starts ends [...] The units are assigned to a damage and one damage can have zero, one or more units working on it. Every time a unit moves to another damage, the dataset is copied. The field "ends" of the old record and "starts" of the new record are set to the current time stamp; enduring_id stays the same. So if I want to know which units were working on a damage at a certain time, I do the following select: select * from unit join damage on damage.enduring_id = unit.damage_enduring_id where unit.starts <= 'time' and unit.ends >= 'time' (This is not an actual query from the database, I made it up to make clear what I mean. The real database is a little more complex.) Now I want to map it in such a way that I can load all the damages which are valid at one time (starts <= wanted time <= ends) and each of them has a Bag with all the attached units at that time (again starts <= wanted time <= ends). Is this possible within the mapping? Sorry if this is a stupid question, but I am pretty new to nhibernate and I have no clue how to do it. Thanks a lot for reading my post! Bye, Tobias

    Read the article

  • Updating nullability of columns in SQL 2008

    - by Shaul
    I have a very wide table, containing lots and lots of bit fields. These bit fields were originally set up as nullable. Now we've just made a decision that it doesn't make sense to have them nullable; the value is either Yes or No, default No. In other words, the schema should change from: create table MyTable( ID bigint not null, Name varchar(100) not null, BitField1 bit null, BitField2 bit null, ... BitFieldN bit null ) to create table MyTable( ID bigint not null, Name varchar(100) not null, BitField1 bit not null, BitField2 bit not null, ... BitFieldN bit not null ) alter table MyTable add constraint DF_BitField1 default 0 for BitField1 alter table MyTable add constraint DF_BitField2 default 0 for BitField2 alter table MyTable add constraint DF_BitField3 default 0 for BitField3 So I've just gone in through SQL Management Studio, updating all these fields to non-nullable, default value 0. And guess what - when I try to update it, SQL Management Studio internally recreates the table and then tries to reinsert all the data into the new table... including the null values! Which of course generates an error, because it's explicitly trying to insert a null value into a non-nullable column. Aaargh! Obviously I could run N update statements of the form: update MyTable set BitField1 = 0 where BitField1 is null update MyTable set BitField2 = 0 where BitField2 is null but as I said before, there are N fields out there, and what's more, this change has to propagate out to several identical databases. Very painful to implement manually. Is there any way to make the table modification just ignore the null values and allow the default rule to kick in when you attempt to insert a null value?
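
    There is no ALTER option that silently rewrites existing NULLs, so the usual route is exactly the UPDATE-then-ALTER pair per column - but the statements can be generated rather than written by hand. Below is a rough C# sketch, assuming every nullable bit column in the database should default to 0 and using a placeholder connection string; it prints a script you can review and then run against each of the identical databases.

      using System;
      using System.Data.SqlClient;

      class NotNullScriptGenerator
      {
          static void Main()
          {
              const string connectionString = "Server=...;Database=...;Integrated Security=true";
              const string query = @"
                  select TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME
                  from INFORMATION_SCHEMA.COLUMNS
                  where DATA_TYPE = 'bit' and IS_NULLABLE = 'YES'";

              using (var conn = new SqlConnection(connectionString))
              using (var cmd = new SqlCommand(query, conn))
              {
                  conn.Open();
                  using (var reader = cmd.ExecuteReader())
                  {
                      while (reader.Read())
                      {
                          string schema = reader.GetString(0);
                          string table  = reader.GetString(1);
                          string column = reader.GetString(2);
                          // Backfill, tighten, then add the default - one block per bit column.
                          Console.WriteLine("update [{0}].[{1}] set [{2}] = 0 where [{2}] is null;", schema, table, column);
                          Console.WriteLine("alter table [{0}].[{1}] alter column [{2}] bit not null;", schema, table, column);
                          Console.WriteLine("alter table [{0}].[{1}] add constraint [DF_{1}_{2}] default 0 for [{2}];", schema, table, column);
                      }
                  }
              }
          }
      }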

    Read the article

  • Using Linq2Sql to insert data into multiple tables using an auto incremented primary key

    - by Thomas
    I have a Customer table with a Primary key (int auto increment) and an Address table with a foreign key to the Customer table. I am trying to insert both rows into the database in one nice transaction. using (DatabaseDataContext db = new DatabaseDataContext()) { Customer newCustomer = new Customer() { Email = customer.Email }; Address b = new Address() { CustomerID = newCustomer.CustomerID, Address1 = billingAddress.Address1 }; db.Customers.InsertOnSubmit(newCustomer); db.Addresses.InsertOnSubmit(b); db.SubmitChanges(); } When I run this I was hoping that the Customer and Address table automatically had the correct keys in the database since the context knows this is an auto incremented key and will do two inserts with the right key in both tables. The only way I can get this to work would be to do SubmitChanges() on the Customer object first then create the address and do SubmitChanges() on that as well. This would create two roundtrips to the database and I would like to see if I can do this in one transaction. Is it possible? Thanks
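
    Assuming the Customer/Address foreign key is mapped as an association in the DBML (so the generated Address class exposes a Customer property), wiring the objects together instead of copying the ID lets a single SubmitChanges order both INSERTs inside one transaction and back-fill the identity value into Address.CustomerID. A sketch using the question's own names:

      using (DatabaseDataContext db = new DatabaseDataContext())
      {
          Customer newCustomer = new Customer() { Email = customer.Email };

          Address b = new Address() { Address1 = billingAddress.Address1 };
          b.Customer = newCustomer;                 // or: newCustomer.Addresses.Add(b);
                                                    // do NOT set CustomerID by hand - it is still 0 here

          db.Customers.InsertOnSubmit(newCustomer); // the Address is tracked through the association
          db.SubmitChanges();                       // one call, one transaction, FK filled in automatically
      }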

    Read the article

  • Self-describing file format for gigapixel images?

    - by Adam Goode
    In medical imaging, there appears to be two ways of storing huge gigapixel images: Use lots of JPEG images (either packed into files or individually) and cook up some bizarre index format to describe what goes where. Tack on some metadata in some other format. Use TIFF's tile and multi-image support to cleanly store the images as a single file, and provide downsampled versions for zooming speed. Then abuse various TIFF tags to store metadata in non-standard ways. Also, store tiles with overlapping boundaries that must be individually translated later. In both cases, the reader must understand the format well enough to understand how to draw things and read the metadata. Is there a better way to store these images? Is TIFF (or BigTIFF) still the right format for this? Does XMP solve the problem of metadata? The main issues are: Storing images in a way that allows for rapid random access (tiling) Storing downsampled images for rapid zooming (pyramid) Handling cases where tiles are overlapping or sparse (scanners often work by moving a camera over a slide in 2D and capturing only where there is something to image) Storing important metadata, including associated images like a slide's label and thumbnail Support for lossy storage What kind of (hopefully non-proprietary) formats do people use to store large aerial photographs or maps? These images have similar properties.

    Read the article

  • Storing Arbitrary Contact Information in Ruby on Rails

    - by Anthony Chivetta
    Hi, I am currently working on a Ruby on Rails app which will function in some ways like a site-specific social networking site. As part of this, each user on the site will have a profile where they can fill in their contact information (phone numbers, addresses, email addresses, employer, etc.). A simple solution to modeling this would be to have a database column per piece of information I allow users to enter. However, this seems arbitrary and limited. Further, to support allowing users to enter as many phone numbers as they would like requires the addition of another database table and joins. It seems to me that a better solution would be to serialize all the contact information entered by a user into a single field in their row. Since I will never be conditioning a SQL query on this information, such a solution wouldn't be any less efficient. Ideally, I would like to use a vCard as my serialization format. vCards are the standard solution to storing contact information across the web, and reusing tested solutions is a Good Thing. Alternative serialization formats would include simply marshaling a ruby hash, or YAML. Regardless of serialization format, supporting the reading and updating of this information in a rails-like way seems to be a major implementation challenge. So, here's the question: Has anyone seen this approach used in a rails application? Are there any rails plugins or gems that make such a system easy to implement? Ideally what I would like is an acts_as_vcard to add to my model object that would handle editing the vcard for me and saving it back to the database.

    Read the article

  • Verizon SongID - How is it programmed?

    - by CheeseConQueso
    For anyone not familiar with Verizon's SongID program, it is a free application downloadable through Verizon's VCast network. It listens to a song for 10 seconds at any point during the song and then sends this data to some all-knowing algorithmic beast that chews it up and sends you back all the ID3 tags (artist, album, song, etc...) The first two parts and last part are straightforward, but what goes on during the processing after the recorded sound is sent? I figure it must take the sound file (what format?), parse it (how? with what?) for some key identifiers (what are these? regular attributes of wave functions? phase/shift/amplitude/etc), and check it against a database. Everything I find online about how this works is something generic like what I typed above. From audiotag.info This service is based on a sophisticated audio recognition algorithm combining advanced audio fingerprinting technology and a large songs' database. When you upload an audio file, it is being analyzed by an audio engine. During the analysis its audio “fingerprint” is extracted and identified by comparing it to the music database. At the completion of this recognition process, information about songs with their matching probabilities are displayed on screen.

    Read the article

  • How to return proper 404 for google while providing user friendly content to the user?

    - by Marek
    I am bouncing between posting this here and on Superuser. Please excuse me if you feel this does not belong here. I am observing the behavior described here - Googlebot is requesting random urls on my site, like aecgeqfx.html or sutwjemebk.html. I am sure that I am not linking these urls from anywhere on my site. I suspect this may be google probing how we handle non-existent content - to cite from an answer to the linked question: [google is requesting random urls to] see if your site correctly handles non-existent files (by returning a 404 response header). We have a custom page for nonexistent content - a styled page saying "Content not found, if you believe you got here by error, please contact us", with a few internal links, served (naturally) with a 200 OK. The URL is served directly (no redirection to a single url). I am afraid this may hurt the site's standing with Google - they may not interpret the user-friendly page as a 404 Not Found and may think we are trying to fake something and provide duplicate content. How should I proceed to ensure that google will not think the site is bogus while providing a user-friendly message to users in case they click on dead links by accident?

    Read the article

  • How do I exclude data from local table schema_migrations from being pushed to Heroku DB?

    - by Thierry Lam
    I was able to push my Ruby on Rails app with MySQL (local dev) to the Heroku server, along with migrating my model with the command heroku rake db:migrate. I have also read the documentation on Database Import/Export. Is that doc referring to pushing actual data from my local dev DB to whichever DB Heroku uses? Do I need to modify anything in the file database.yml to make it happen? I ran the following command: heroku db:push and I am getting the error: Sending data 2 tables, 3 records !!! Caught Server Exception | ETA: --:--:-- Taps Server Error: PGError ERROR: duplicate key value violates unique constraint "unique_schema_migrations" I have 2 tables, one I created for my app and the other schema_migrations. The total number of entries among the 2 tables is 3. I'm also printing the number of entries I have in the table I created and it's showing 0. Any ideas what I might be missing or what I am doing wrong? EDIT: I figured out the above - Heroku's DB already has schema_migrations the moment I ran migrate. New question: Does anyone know how I can exclude data from a specific table from being pushed to the Heroku DB? The table to exclude in this case will be schema_migrations. Not so good solution: I googled around and someone else was having the same issue. He suggested renaming the schema_migrations table to zschema_migrations. In this way data from the other tables will be pushed properly until it fails on the last table. It's a pretty bad solution but will do for the time being. A better solution would be to use an existing Rails command which can reset a specific table in a database. I don't think Rake can do that.

    Read the article

  • Oracle syntax - should we have to choose between the old and the new?

    - by Martin Milan
    Hi, I work on a code base in the region of about 1'000'000 lines of source, in a team of around eight developers. Our code is basically an application using an Oracle database, but the code has evolved over time (we have plenty of source code from the mid nineties in there!). A dispute has arisen amongst the team over the syntax that we are using for querying the Oracle database. At the moment, the overwhelming majority of our queries use the "old" Oracle Syntax for joins, meaning we have code that looks like this... Example of Inner Join select customers.*, orders.date, orders.value from customers, orders where customers.custid = orders.custid Example of Outer Join select customers.custid, contacts.ContactName, contacts.ContactTelNo from customers, contacts where customers.custid = contacts.custid(+) As new developers have joined the team, we have noticed that some of them seem to prefer using SQL-92 queries, like this: Example of Inner Join select customers.*, orders.date, orders.value from customers inner join orders on (customers.custid = orders.custid) Example of Outer Join select customers.custid, contacts.ContactName, contacts.ContactTelNo from customers left join contacts on (customers.custid = contacts.custid) Group A say that everyone should be using the the "old" syntax - we have lots of code in this format, and we ought to value consistency. We don't have time to go all the way through the code now rewriting database queries, and it wouldn't pay us if we had. They also point out that "this is the way we've always done it, and we're comfortable with it..." Group B however say that they agree that we don't have the time to go back and change existing queries, we really ought to be adopting the "new" syntax on code that we write from here on in. They say that developers only really look at a single query at a time, and that so long as developers know both syntax there is nothing to be gained from rigidly sticking to the old syntax, which might be deprecated at some point in the future. Without declaring with which group my loyalties lie, I am interested in hearing the opinions of impartial observers - so let the games commence! Martin. Ps. I've made this a community wiki so as not to be seen as just blatantly chasing after question points...

    Read the article

  • Merging datasets with 2 different time variables in SAS

    - by John
    Hey guys, for those regularly browsing this site, sorry for yet another question (however I did solve my last question myself!). I have another problem with merging datasets; it seems that accounting for time in datasets is a real pain in the ass. I successfully managed to merge on months in my previous datasets, however it seems I have a final dataset which only has quarter as a time count variable. So where all my normal databases have month 1- xxx as an indicator of time, this database has quarter as an indicator of time. I still want to add the variables of this last database, let's call it TVOL, into my WORK database. Quick summary QUARTER: Quarter 0 = JAN1996-MAR1996 Month: Month 0 = JAN1996 Example: TVOL TVOL _ Ticker ____ Quarter 1500 _ AA ________ -1 52546 _ BB ________ 15 Example: WORK BETA _ Ticker ____ Month 1.52 _ AA ________ 2 1.54__ BB _______ 3 Example: Merged: BETA ______ TVOL __ Ticker ____ Month 1.52 _______ 500 ___ AA _______ 2 I now want to merge these 2 tables using the following relationship: if the month is in quarter 1, the data of quarter 0 has to be used, so if I have an observation in WORK with date 2FEB1996, the TVOL of quarter -1 has to be put behind this observation. Something like: IF month is in quarter i, use data from quarter i-1. Also, as TVOL is measured quarterly and I have to put it in monthly, I have to take the average, so (TVOL/3) should be added as a variable. Thanks!

    Read the article

  • LINQ to XML contents of child records.

    - by Fossaw
    I have this LINQ to XML query... var Records = from Item in XDoc.Root.Elements("Item") where (string)Item.Element("ItemNumber") == item.ID.ToString() select Item; ... where ItemNumber is a reference number used in the XML (originally written by this program but manually edited by "others"), and item.ID is the database version of the same thing. The query executes, and I can test for the number of entries in the result fine... if (Records.Count() < 1) ... you get the idea. I have established that there is only one record. Each Item has several child fields. I want to test that the values of the child fields are reasonable before passing them on to the database update sub-system. The XML is produced by the program, but edited by users, so I need to really check what is coming back. So I tried... if (DB_English.ToString() != Records.Elements("English").ToString()) ... DB_English is from the database, but the XML in Records does not contain the contents of that field, it contains... System.Xml.Linq.Extensions+<GetElements>d__29`1[System.Xml.Linq.XElement] ... so, how do I get the value of this element in the XML file? I need to check that the field in the XML has not been altered (the manual editors of this data file are potentially not 100% reliable).
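
    For the comparison itself, the fix is to pull a single XElement out of the result and cast it to string, rather than calling ToString() on the IEnumerable returned by Elements() - that is exactly what produces the System.Xml.Linq.Extensions+<GetElements> output above. A minimal sketch against the same Records query:

      var record = Records.SingleOrDefault();                  // already verified there is exactly one
      if (record != null)
      {
          string english = (string)record.Element("English");  // null when the element is missing
          if (english == null || DB_English.ToString() != english)
          {
              // the field was removed or altered by hand - handle it before the database update
          }
      }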

    Read the article

  • random data using php & mysql

    - by Prakash
    I have a MySQL database structure like the one below: CREATE TABLE test ( id int(11) NOT NULL auto_increment, title text NULL, tags text NULL, PRIMARY KEY (id) ); Data in the tags field is stored as comma-separated text like html,php,mysql,website,html etc... Now I need to create an array that contains around 50 randomly selected tags from random records. Currently I am using RAND() to select 15 random MySQL records from the database and then holding all the tags from those 15 records in an array. Then I am using array_rand() for randomizing the array and selecting only 50 random entries. $query=mysql_query("select * from test order by id asc, RAND() limit 15"); $tags=""; while ($eachData=mysql_fetch_array($query)) { $additionalTags=$eachData['tags']; if ($tags=="") { $tags.=$additionalTags; } else { $tags.=$tags.",".$additionalTags; } } $tags=explode(",", $tags); $newTags=array(); foreach ($tags as $tag) { $tag=trim($tag); if ($tag!="") { if (!in_array($tag, $newTags)) { $newTags[]=$tag; } } } $random_newTags=array_rand($newTags, 50); Now I have a huge number of records in the database, and because of that RAND() is performing very slowly and sometimes doesn't work. So can anyone let me know how to handle this situation correctly so that my page will work normally?

    Read the article

  • Creating circular generic references

    - by M. Jessup
    I am writing an application to do some distributed calculations in a peer to peer network. In defining the network I have two class the P2PNetwork and P2PClient. I want these to be generic and so have the definitions of: P2PNetwork<T extends P2PClient<? extends P2PNetwork<T>>> P2PClient<T extends P2PNetwork<? extends T>> with P2PClient defining a method of setNetwork(T network). What I am hoping to describe with this code is: A P2PNetwork is constituted of clients of a certain type A P2PClient may only belong to a network whose clients consist of the same type as this client (the circular-reference) This seems correct to me but if I try to create a non-generic version such as MyP2PClient<MyP2PNetwork<? extends MyP2PClient>> myClient; and other variants I receive numerous errors from the compiler. So my questions are as follows: Is a generic circular reference even possible (I have never seen anything explicitly forbidding it)? Is the above generic definition a correct definition of such a circular relationship? If it is valid, is it the "correct" way to describe such a relationship (i.e. is there another valid definition, which is stylistically preferred)? How would I properly define a non-generic instance of a Client and Server given the proper generic definition?

    Read the article
