Search Results

Search found 30279 results on 1212 pages for 'database drift'.

Page 588/1212 | < Previous Page | 584 585 586 587 588 589 590 591 592 593 594 595  | Next Page >

  • SQL Server 2008 - Conditional Query

    - by Villager
    Hello, SQL is not one of my strong suits. I have a SQL Server 2008 database with a stored procedure that takes eight int parameters. To keep this question focused, I will use one of them for reference:

        @isActive int

    Each of these int parameters will be -1, 0, or 1, where -1 means "Unknown" or "Don't Care". Basically, I need to query a table such that if an int parameter is NOT -1, it is considered in my WHERE clause. Because there are eight int parameters, an IF-ELSE statement does not seem like a good idea, and I do not know how else to do this. Is there an elegant way in SQL to add a WHERE condition only when a parameter does NOT equal a value? Thank you!

    Read the article

  • Unique identification string in php

    - by NardCake
    Currently my friend and I are developing a website. For what we will call 'projects' we just have a basic auto-increment id in the database, used to navigate to projects such as oururl.com/viewproject?id=1. But we started thinking: if we have a lot of posted projects, that's going to be a LONG url. So we need to somehow randomly generate an alphanumeric string about 6 characters long. We want the chance of the string being duplicated to be extremely low, and of course we will query the database before assigning an identifier. Thanks for any help, it means a lot!
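
    A minimal sketch of one common approach, kept deliberately simple: generate a random 6-character base-62 string and retry on the (rare) collision before inserting. The projects table, slug column, and $title variable are assumptions for illustration, not taken from the question.

        <?php
        // Random 6-character identifier, checked against the database before use.
        function random_slug($length = 6)
        {
            $alphabet = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';
            $slug = '';
            for ($i = 0; $i < $length; $i++) {
                $slug .= $alphabet[mt_rand(0, strlen($alphabet) - 1)];
            }
            return $slug;
        }

        $pdo = new PDO('mysql:host=localhost;dbname=site', 'user', 'pass');

        do {
            $slug  = random_slug();
            $check = $pdo->prepare('SELECT COUNT(*) FROM projects WHERE slug = ?');
            $check->execute(array($slug));
        } while ($check->fetchColumn() > 0);   // 62^6 is roughly 5.7e10 combinations, so retries are rare

        // Keep the auto-increment id internally and expose the slug in URLs,
        // e.g. oururl.com/viewproject?slug=aZ3k9Q ($title assumed to come from a form).
        $insert = $pdo->prepare('INSERT INTO projects (slug, title) VALUES (?, ?)');
        $insert->execute(array($slug, $title));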

    Read the article

  • Batch To Bash Conversion

    - by Steven
    I need to convert this batch script into Bash:

        @echo off
        set /p name= Name?
        findstr /m "%name%" ndatabase.txt
        if %errorlevel%==0 (
        cls
        echo The name is found in the database!
        pause >nul
        exit
        )
        cls
        echo.
        echo Name not found in database.
        pause >nul
        exit

    I am new to Linux, so I am starting off with an easy distro - Ubuntu 12.10. My problem is that I do not really know much Bash scripting, since I am very accustomed to the batch script format, which is obviously a bad habit for my C++.

    Read the article

  • PHP query: can't figure out the problem

    - by Eat
    I don't get an error, but it doesn't return the row values either. How can I track down the problem?

        <?php
        //DATABASE
        $dbConn = mysql_connect($host,$username,$password);
        mysql_select_db($database,$dbConn);
        $SQL = mysql_query("SELECT N.*, C.CatID FROM News N INNER JOIN Categories C ON N.CatID = C.CatID WHERE N.Active = 1 ORDER BY DateEntered DESC");
        while ( $Result = mysql_fetch_array($SQL) or die(mysql_error())) {
            $CatID[] = $Result[CatID];
            $NewsName[] = $Result[NewsName];
            $NewsShortDesc[] = $Result[NewsShortDesc];
        }
        // mysql_free_result($Result);
        ?>
        <div class="toparticle">
            <span class="section"><?=$CatID[0] ?> </span>
            <span class="headline"><?=$NewsName[0] ?></span>
            <p><?=$NewsShortDesc[0] ?></p>
        </div>
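
    A guess at the cause, sketched below rather than confirmed: because = binds tighter than or, the "or die(mysql_error())" attached to mysql_fetch_array() fires as soon as the fetch returns false at the end of the result set, and since there is no MySQL error at that point the script dies silently before the HTML underneath is ever printed. Moving the error check onto the calls that can actually fail avoids that:

        <?php
        // Same loop, with "or die" moved off the fetch (an assumed fix, using the
        // table/column names from the question and the legacy mysql_* extension).
        $dbConn = mysql_connect($host, $username, $password) or die(mysql_error());
        mysql_select_db($database, $dbConn) or die(mysql_error());

        $SQL = mysql_query("SELECT N.*, C.CatID
                            FROM News N
                            INNER JOIN Categories C ON N.CatID = C.CatID
                            WHERE N.Active = 1
                            ORDER BY DateEntered DESC") or die(mysql_error());

        while ($Result = mysql_fetch_array($SQL)) {      // false simply ends the loop
            $CatID[]         = $Result['CatID'];         // quoted keys avoid notices
            $NewsName[]      = $Result['NewsName'];
            $NewsShortDesc[] = $Result['NewsShortDesc'];
        }
        ?>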

    Read the article

  • SQL To Filter A SubSet Of Data

    - by Nick LaMarca
    I have an ArrayList that holds a subset of the names found in my database. I need to write a query to get a count of the people in the ArrayList for certain groupings; i.e., there is a "City" field in my database, and for the people in the ArrayList of names I want to know how many of them live in Chicago, how many live in New York, etc. Can someone help me set up an SQL statement to handle this? I think I somehow have to pass the subset of names to SQL.

    Read the article

  • iPhone SDK: Data Synchronization

    - by buzzappsoftware
    I am looking for an overview of data synchronization techniques available on the iPhone platform. We need to be able to sync a subset of content from a server to a local database residing on the iPhone. On other projects I have worked on, the data synchronization was handled by the database. Is that available in SQLite? If not, any suggestions on techniques? Rolling our own would not be my first choice. Thanks in advance.

    Read the article

  • How do I manage a limit of executions per hour (max 1000 requests per hour) without a database?

    - by cslavoie
    I am currently developing a script in PHP to fetch web pages. The problem is that, in doing so, I occasionally make too many requests to a particular website. In order to control any overflow, I would like to keep track of how many requests have been made in the last hour or so for each domain. It doesn't need to be perfect, just a good estimate. I don't have access to a database, except SQLite 2. I would really like something simple, because there will typically be a lot of updates, which is kind of heavy for an SQLite database. If no one has a magical solution I'll go for SQLite, but I was curious what you can come up with. Thank you very much.
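
    A minimal sketch of a database-free alternative, under the assumption that the script can write a small JSON file (the file path and the 1000-per-hour limit below are illustrative): keep one counter per domain per hour bucket and drop old buckets as you go.

        <?php
        // Per-domain request counter bucketed by hour, kept in a flat JSON file.
        function requests_this_hour($domain, $file = '/tmp/request_counts.json')
        {
            $bucket = $domain . '|' . date('YmdH');      // e.g. "example.com|2013040114"

            $fp = fopen($file, 'c+');
            flock($fp, LOCK_EX);                         // avoid lost updates between workers

            $raw    = stream_get_contents($fp);
            $counts = $raw ? json_decode($raw, true) : array();

            $counts[$bucket] = isset($counts[$bucket]) ? $counts[$bucket] + 1 : 1;

            // Drop buckets from earlier hours so the file stays tiny.
            foreach (array_keys($counts) as $key) {
                if (substr($key, -10) !== date('YmdH')) unset($counts[$key]);
            }

            ftruncate($fp, 0);
            rewind($fp);
            fwrite($fp, json_encode($counts));
            flock($fp, LOCK_UN);
            fclose($fp);

            return $counts[$bucket];
        }

        // Usage: skip the fetch once the hourly budget for that domain is spent.
        if (requests_this_hour('example.com') > 1000) {
            exit("Hourly limit reached for example.com\n");
        }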

    Read the article

  • Accessing TextView from another class

    - by Jenny
    I've got my main startup class loading main.xml, but I'm trying to figure out how to access the TextView from another class which is loading information from a database. I would like to publish that information to the TextView. So far I've not found any helpful examples on Google.

    EDIT: This is the class that is doing the database work:

        import android.widget.TextView;
        import android.view.View;

        public class DBWork {
            private View view;
            ...
            TextView tv = (TextView) view.findViewById(R.id.TextView01);
            tv.setText("TEXT ME")

    Yet every time I do that I get a NullPointerException.

    Read the article

  • Run a site on Scheme

    - by Lajla
    I can't find this on Google (so maybe it doesn't exist), but basically I'd like to install something on a web server such that I can run a site on Scheme; PHP is starting to annoy me and I want to get rid of it. What I want is:

    - Run Scheme sources towards UTF-8 output (duh)
    - Support for SXML, SXLT, et cetera; I plan to compose the whole thing in SXML and convert to the normal representation at the end
    - The ability to read other files from the server, write them, set permissions, et cetera
    - Also some things to, for instance, determine the file size of files, the height of images, MIME types, and all that mumbo-jumbo
    - (Optionally) connect to a database, though for what I want to do, storing the entire database in S-expressions itself is feasible enough

    I don't need any fancy libraries and other things that come with it, like CMSes and whatnot, except the support for SXML - but I'm sure I can find a library for that anyway that I can load.

    Read the article

  • Possible to use another column name instead of the ID field in Lift?

    - by bstevens90
    I am connected to an Oracle database from a Scala/Lift webapp. I have been able to successfully pull information from the database as I wished, but am having one issue. For each table I want to access, I am required to add an ID field so that the app will work with the trait IdPK. What mapper class or trait can I use to override this? I have been trying to find one but have been unable to locate it. I figured people have not always had an ID field on every table they make that is just called ID...

        class DN_REC extends LongKeyedMapper[DN_REC] with IdPK {
            def getSingleton = DN_REC
            object dn_rec_id extends MappedInt(this) {
            }

    This is what I am talking about. I would like to use dn_rec_id as my primary key, as it is on the table. Thanks

    Read the article

  • How to store matrix information in MySQL?

    - by dedalo
    Hi, I'm working on an application that analyzes music similarity. In order to do that I process audio data and store the results in txt files. For each audio file I create 2 files: one containing 16 values (each value can be like this: 2.7000023942731723), and the other containing 16 rows, each row containing 16 values like the one previously shown. I'd like to store the contents of these 2 files in a table of my MySQL database. My table looks like:

        Name varchar(100)
        Author varchar(100)

    In order to add the content of those 2 files, I think I need to use the BLOB data type:

        file1 blob
        file2 blob

    My question is, how should I store this info in the database? I'm working with Java, where I have a double array containing the 16 values (for file1) and a matrix containing the file2 info. Should I process the values as strings and add them to the columns in my database? Thanks

    Read the article

  • How to check user online status on a web site?

    - by Milan Sanda Sri
    How can I know when a user is online or offline? When a user logs in to my site, I set a script to change a database field for that user, a boolean that indicates the user's online state. But my problem is that if he/she leaves my site without clicking the log-out button, my script does not run and the database still shows him/her as an online user. Please give me any suggestion to fix this - I have no idea what to do! I checked some answers on this topic, but most of them say something like: if the last activity time is within the last 15 minutes, the user is online, otherwise offline. But I have seen some social networking sites show that we have gone offline as soon as we close the browser. How do they do that?
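
    A sketch of the usual heartbeat variant of the last-activity approach (the users table, the column names, and the 90-second window below are assumptions): a small piece of JavaScript on each page pings a PHP endpoint every 30 seconds or so, and a user counts as online only while those pings keep arriving, so closing the browser flips them to offline within a minute or two instead of fifteen.

        <?php
        // heartbeat.php - called by the browser every ~30 seconds while a page is open.
        session_start();
        $pdo = new PDO('mysql:host=localhost;dbname=mysite', 'user', 'pass');

        $stmt = $pdo->prepare('UPDATE users SET last_activity = NOW() WHERE id = ?');
        $stmt->execute(array($_SESSION['user_id']));

        // Wherever status is displayed: online means "heartbeat seen recently".
        // Closing the browser stops the pings, so the user drops offline once
        // the short window passes - no logout button needed.
        $check = $pdo->prepare(
            'SELECT last_activity > NOW() - INTERVAL 90 SECOND AS online
             FROM users WHERE id = ?'
        );
        $check->execute(array($someUserId));   // $someUserId: whichever profile is being shown
        $isOnline = (bool) $check->fetchColumn();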

    Read the article

  • Open Source PHP search engine

    - by Ravi Gupta
    I am looking for an open source search engine plugin written in PHP for my website (eCommerce). Before anybody answers, I have a doubt regarding the search engine. Usually a search engine crawls web pages, creates indexes, and then uses them when looking for content. But will the same model work for eCommerce websites? Yeah, it can crawl product pages and index them, but don't you think it would be better if it crawled the database directly and indexed the products stored in the database? Then, when a user searches for any product, it would simply give us the rows of the table which match the user's query. Maybe what I am asking is a stupid question, but I am new to web development, so kindly help me understand the concept. I have looked at a search engine called Sphider but didn't get what I have to do to make it work with an eCommerce website.
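
    For the "index the database directly" idea, a minimal sketch of what that can look like with MySQL's built-in full-text search; the products table, its columns, and the FULLTEXT index are assumptions for illustration only.

        <?php
        // Search the product rows themselves rather than crawled pages.
        // Assumes: ALTER TABLE products ADD FULLTEXT(name, description);
        $pdo = new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass');

        $q = isset($_GET['q']) ? $_GET['q'] : '';

        $stmt = $pdo->prepare(
            'SELECT id, name, price,
                    MATCH(name, description) AGAINST (?) AS relevance
             FROM products
             WHERE MATCH(name, description) AGAINST (?)
             ORDER BY relevance DESC
             LIMIT 20'
        );
        $stmt->execute(array($q, $q));

        foreach ($stmt as $row) {
            echo htmlspecialchars($row['name']) . ' - ' . $row['price'] . "<br>\n";
        }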

    Read the article

  • How do I split the output from mysqldump into smaller files?

    - by lindelof
    I need to move entire tables from one MySQL database to another. I don't have full access to the second one, only phpMyAdmin access. I can only upload (compressed) sql files smaller than 2MB. But the compressed output from a mysqldump of the first database's tables is larger than 10MB. Is there a way to split the output from mysqldump into smaller files? I cannot use split(1) since I cannot cat(1) the files back on the remote server. Or is there another solution I have missed?

    Edit: The --extended-insert=FALSE option to mysqldump suggested by the first poster yields a .sql file that can then be split into importable files, provided that split(1) is called with a suitable --lines option. By trial and error I found that bzip2 compresses the .sql files by a factor of 20, so I needed to figure out how many lines of sql code correspond roughly to 40MB.

    Read the article

  • Need advice on cron job'ing a very large process

    - by Arms
    I have a PHP script that grabs data from an external service and saves it to my database. I need this script to run once every minute for every user in the system (of which I expect there to be thousands). My question is, what's the most efficient way to run this per user, per minute? At first I thought I would have a function that grabs all the user ids from my database, iterates over the ids, and performs the task for each one, but I think that as the number of users grows this will take longer and no longer fall within 1-minute intervals. Perhaps I should queue the user ids and perform the task individually for each one? In which case, I'm actually unsure of how to proceed. Thanks in advance for any advice.
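
    One way to keep the per-minute pass from growing past a minute is to shard the users across several worker processes instead of one serial loop. The sketch below assumes a users table, a user_stats table, and a hypothetical fetch_external_data() helper standing in for the external-service call, with cron starting e.g. "php worker.php 0 4" through "php worker.php 3 4" every minute.

        <?php
        // worker.php <slot> <workers> - each process handles only its slice of users.
        list(, $slot, $workers) = $argv;

        $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

        // Each worker only touches users whose id falls in its slice.
        $stmt = $pdo->prepare('SELECT id FROM users WHERE MOD(id, ?) = ?');
        $stmt->execute(array((int) $workers, (int) $slot));

        foreach ($stmt->fetchAll(PDO::FETCH_COLUMN) as $userId) {
            $data = fetch_external_data($userId);   // hypothetical call to the external service
            $save = $pdo->prepare('UPDATE user_stats SET payload = ?, updated_at = NOW() WHERE user_id = ?');
            $save->execute(array($data, $userId));
        }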

    Read the article

  • SQL version control methodology

    - by Tom H.
    There are several questions on SO about version control for SQL and lots of resources on the web, but I can't find something that quite covers what I'm trying to do. First off, I'm talking about a methodology here. I'm familiar with the various source control applications out there and with tools like Red Gate's SQL Compare, etc., and I know how to write an application to check things in and out of my source control system automatically. If there is a tool which would be particularly helpful in providing a whole new methodology, or which has a useful and uncommon functionality, then great, but for the tasks mentioned above I'm already set.

    The requirements that I'm trying to meet are:

    - The database schema and look-up table data are versioned
    - DML scripts for data fixes to larger tables are versioned
    - A server can be promoted from version N to version N + X, where X may not always be 1
    - Code isn't duplicated within the version control system - for example, if I add a column to a table, I don't want to have to make sure that the change is in both a create script and an alter script
    - The system needs to support multiple clients who are at various versions of the application (trying to get them all up to within 1 or 2 releases, but not there yet)

    Some organizations keep incremental change scripts in their version control, and to get from version N to N + 3 you would have to run scripts for N to N+1, then N+1 to N+2, then N+2 to N+3. Some of these scripts can be repetitive (for example, a column is added but then later altered to change the data type). We're trying to avoid that repetitiveness, since some of the client DBs can be very large and these changes might take longer than necessary.

    Some organizations will simply keep a full database build script at each version level, then use a tool like SQL Compare to bring a database up to one of those versions. The problem here is that intermixing DML scripts can be a problem. Imagine a scenario where I add a column, use a DML script to fill said column, and then in a later version that column name is changed.

    Perhaps there is some hybrid solution? Maybe I'm just asking for too much? Any ideas or suggestions would be greatly appreciated though. If the moderators think that this would be more appropriate as a community wiki, please let me know. Thanks!

    Read the article

  • Is it a problem if I query SQL Server 2005 and 2000 again and again?

    - by learner
    The Windows app I am constructing is for very low-end machines (Celeron with at most 128 MB RAM). Of the following two approaches, which one is best? (I don't want the application to become a memory hog on low-end machines.)

    Approach one: query the database with

        Select GUID from Table1 where DateTime <= @givendate

    which returns more than 300 thousand records (but only one field, i.e. GUID - 300 thousand GUIDs), then run a loop over the GUIDs to carry out the next step of the software.

    Approach two: query the database with

        Select Top 1 GUID from Table1 where DateTime <= @givendate

    again and again until all 300 thousand records are done. It returns only one GUID at a time, and I can do my next step of the operation.

    What do you suggest - which approach will use less memory? (Speed/performance is not the issue here.)

    Read the article

  • php class crash course

    - by rabidmachine9
    Hello people, I'm sorry for this question, but I'm going crazy trying to write my first PHP class. It is supposed to connect me to the server and to a database, but I'm always getting this error:

        Parse error: syntax error, unexpected ',', expecting T_PAAMAYIM_NEKUDOTAYIM in /Applications/XAMPP/xamppfiles/htdocs/classTest/test.php on line 9

    Thanks in advance; here is the code:

        <?php
        // include 'DBConnect.php';
        class DBConnect {
            function connection($hostname = "localhost", $myname = "root", $pass = ''){
                mysql_connect(&hostname, &myname, &pass) or die("Could not connect!"); //Connect to mysql
            }
            function choose($dbnam){
                mysql_select_db(&dbnam) or die("Couldn't find database!"); //fnd specific db
            }
        }
        $a = new DBConnect();
        $connect = $a->connection("localhost", "root");
        $a->choose('mydb');
        ?>
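
    A guess at what's happening, with a corrected sketch below: in mysql_connect(&hostname, &myname, &pass) the arguments are missing their $, so the parser treats each bare name after & as the start of a static reference and expects :: (that is the T_PAAMAYIM_NEKUDOTAYIM in the message) instead of the comma. Dropping the & and using plain variables should get past the parse error; the legacy mysql_* calls are kept exactly as in the question.

        <?php
        // Same class with plain $variables and no call-time "&" (assumed fix).
        class DBConnect
        {
            function connection($hostname = "localhost", $myname = "root", $pass = '')
            {
                // Connect to MySQL
                mysql_connect($hostname, $myname, $pass) or die("Could not connect!");
            }

            function choose($dbnam)
            {
                // Select the specific database
                mysql_select_db($dbnam) or die("Couldn't find database!");
            }
        }

        $a = new DBConnect();
        $a->connection("localhost", "root");
        $a->choose('mydb');
        ?>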

    Read the article

  • Entity framework Update fails when object is linked to a missing child

    - by McKay
    I'm having trouble updating an object's child when the object has a reference to a nonexistent child record. E.g., tables Car and CarColor have a relationship:

        Car.CarColorId
        CarColor.CarColorId

    If I load the car with its color record like this:

        var result = from x in database.Car.Include("CarColor")
                     where x.CarId == 5
                     select x;

    I'll get back the Car object and its Color object. Now suppose that some time ago a CarColor had been deleted, but the Car record in question still contains the CarColorId value. So when I run the query, the Color object is null because the CarColor record didn't exist. My problem is that when I attach another Color object that does exist, I get a "Store update, insert" error when saving:

        Car.Color = newColor;
        Database.SaveChanges();

    It's like the context is trying to delete the nonexistent color. How can I get around this?

    Read the article

  • Why does Hibernate ignore the name attribute of the @Column annotation?

    - by svachon
    Using Hibernate 3.3.1 and Hibernate Annotations 3.4; the database is DB2/400 V6R1, running on WebSphere 7.0.0.9. I have the following class:

        @Entity
        public class Ciinvhd implements Serializable {
            @Id
            private String ihinse;
            @Id
            @Column(name="IHINV#")
            private BigDecimal ihinv;
            ....
        }

    For reasons I can't figure out, Hibernate ignores the specified column name and uses 'ihinv' to generate the SQL:

        select ciinvhd0_.ihinse as ihinse13_, ciinvhd0_.ihinv as ihinv13_, ...

    Which of course gives me the following error:

        Column IHINV not in table CIINVHD

    Has anyone had this problem before? I have other entities that are very much alike, in that they use # in their database field names as part of the PK, and I don't have this problem with them.

    Read the article

  • Is it possible to output formats other than .docx and .odt with TinyButStrong and the OpenTBS plugin?

    - by Corum
    I have a module which merges documents from database records and a .docx or .odt document template. I have to output .docx, .odt, or .pdf. For outputting the MS and Open formats there is no problem; all works properly. But what I want to know is whether I can output something (like XML or HTML) which I can then use to build a PDF document. If I can't, are there any libraries which provide document merging like: DOCX (or ODT) + database record => PDF? And I don't want to use phplivedocx.
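
    One commonly used route, sketched under the assumption that a headless LibreOffice (the soffice binary) is installed on the server and reachable from PHP: keep merging to .docx/.odt exactly as before, then convert the merged file to PDF as a separate step. This is not an OpenTBS feature, just an external conversion.

        <?php
        // Convert a merged .docx/.odt file to PDF via LibreOffice in headless mode.
        function convert_to_pdf($mergedFile, $outDir)
        {
            $cmd = sprintf(
                'soffice --headless --convert-to pdf --outdir %s %s 2>&1',
                escapeshellarg($outDir),
                escapeshellarg($mergedFile)
            );
            exec($cmd, $output, $status);

            if ($status !== 0) {
                throw new Exception("PDF conversion failed: " . implode("\n", $output));
            }

            // LibreOffice writes <basename>.pdf into the output directory.
            return $outDir . '/' . pathinfo($mergedFile, PATHINFO_FILENAME) . '.pdf';
        }

        // Usage, with a hypothetical merged file produced by TinyButStrong/OpenTBS:
        $pdf = convert_to_pdf('/tmp/merged_invoice.docx', '/tmp');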

    Read the article

  • I can't delete a record in CodeIgniter

    - by jomblo
    I'm learning CRUD in CodeIgniter. I have a table named "posting" and its columns are like this: (id, title, post). I succeeded in creating a new post (both inserting into the database and displaying it in the view). But I have a problem when I delete a post from the front end. Here is my code:

    Model

        class Post_Model extends CI_Model {
            function index() {
                //Here is my homepage code
            }
            function delete_post($id) {
                $this->db->where('id', $id);
                $this->db->delete('posting');
            }
        }

    Controller

        class Post extends CI_Controller {
            function delete() {
                $this->load->model('Post_Model');
                $this->Post_Model->delete_post("id");
                redirect('Post/index/', 'refresh');
            }
        }

    After clicking "delete" on the homepage, nothing happens; when I look into my database, my records are still there. Note: (1) to delete the record I'm following the CodeIgniter manual / user guide; (2) I get an error message (Undefined variable: id) after hitting the "delete" button on the front end. Any help or suggestions, please.
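
    A sketch of the usual CodeIgniter pattern, guessing that the undefined-variable notice comes from the controller never receiving an id: put the real id in the delete URL and let CodeIgniter hand the URI segment to the method, instead of passing the literal string "id" to the model. The anchor() call assumes the URL helper is loaded.

        <?php
        // In the view, link each row to post/delete/<its id>, e.g.:
        //   echo anchor('post/delete/' . $row->id, 'delete');

        class Post extends CI_Controller {

            function delete($id)                      // CI maps the 3rd URI segment to $id
            {
                $this->load->model('Post_Model');
                $this->Post_Model->delete_post($id);  // pass the actual value, not "id"
                redirect('post/index', 'refresh');
            }
        }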

    Read the article

  • Intersect() and Except() are too slow with large collections of custom objects

    - by Theo
    I am importing data from another database. My process imports data from a remote DB into a List<DataModel> named remoteData and also imports data from the local DB into a List<DataModel> named localData. I am then using LINQ to create a list of records that are different, so that I can update the local DB to match the data pulled from the remote DB. Like this:

        var outdatedData = this.localData.Intersect(this.remoteData, new OutdatedDataComparer()).ToList();

    I am then using LINQ to create a list of records that no longer exist in remoteData, but do exist in localData, so that I can delete them from the local database. Like this:

        var oldData = this.localData.Except(this.remoteData, new MatchingDataComparer()).ToList();

    I am then using LINQ to do the opposite of the above to add the new data to the local database. Like this:

        var newData = this.remoteData.Except(this.localData, new MatchingDataComparer()).ToList();

    Each collection imports about 70k records, and each of the 3 LINQ operations takes between 5 and 10 minutes to complete. How can I make this faster?

    Here is the object the collections are using:

        internal class DataModel
        {
            public string Key1 { get; set; }
            public string Key2 { get; set; }
            public string Value1 { get; set; }
            public string Value2 { get; set; }
            public byte? Value3 { get; set; }
        }

    The comparer used to check for outdated records:

        class OutdatedDataComparer : IEqualityComparer<DataModel>
        {
            public bool Equals(DataModel x, DataModel y)
            {
                var e = string.Equals(x.Key1, y.Key1)
                    && string.Equals(x.Key2, y.Key2)
                    && (
                        !string.Equals(x.Value1, y.Value1)
                        || !string.Equals(x.Value2, y.Value2)
                        || x.Value3 != y.Value3
                    );
                return e;
            }

            public int GetHashCode(DataModel obj) { return 0; }
        }

    The comparer used to find old and new records:

        internal class MatchingDataComparer : IEqualityComparer<DataModel>
        {
            public bool Equals(DataModel x, DataModel y)
            {
                return string.Equals(x.Key1, y.Key1) && string.Equals(x.Key2, y.Key2);
            }

            public int GetHashCode(DataModel obj) { return 0; }
        }

    Read the article

  • Sync framework 2.0 smart device to server

    - by Oll
    We have a requirement very similar to that shown in the Occasionally Connected Application (OCA) diagram in the Introduction to Sync Framework Database Synchronization article. However, we can't find any examples of how the clients at the bottom sync between each other - in particular, how a smart device syncs with another client, each having a local .sdf database. There are lots of examples of a smart device syncing to a server over WCF where the server is running full SQL Server, but not of smart device to server with a local cache. Does anyone have any ideas or examples? Thanks

    Read the article
