Search Results

Search found 30474 results on 1219 pages for 'relational database'.

Page 601/1219 | < Previous Page | 597 598 599 600 601 602 603 604 605 606 607 608  | Next Page >

  • Batch To Bash Conversion

    - by Steven
    I need to convert this batch script into Bash:

        @echo off
        set /p name= Name?
        findstr /m "%name%" ndatabase.txt
        if %errorlevel%==0 (
            cls
            echo The name is found in the database!
            pause >nul
            exit
        )
        cls
        echo.
        echo Name not found in database.
        pause >nul
        exit

    I am new to Linux, so I am starting off with an easy distro - Ubuntu 12.10. My problem is that I do not really know much Bash scripting, since I am very accustomed to the batch script format, which is obviously a bad habit to carry over.
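
    A rough Bash equivalent might look like this (an untested sketch; it assumes ndatabase.txt sits in the working directory and uses grep in place of findstr):

        #!/bin/bash
        # Prompt for a name, then search ndatabase.txt for it.
        read -p "Name? " name
        clear
        if grep -q "$name" ndatabase.txt; then
            echo "The name is found in the database!"
        else
            echo
            echo "Name not found in database."
        fi
        read -n 1 -s    # like "pause >nul": wait silently for one keypress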

    Read the article

  • How to delete unused sequences?

    - by user1023877
    We are using PostgreSQL. My requirement is to delete unused sequences from my database. For example, when I create a table through my application, a sequence is created, but when the table is deleted its sequence is not deleted with it. If I then create a table with the same name, another sequence is created. Example: for the table file, the automatically created sequence for the id column is file_id_seq. When I delete the table file and create it again under the same name, a new sequence is created (i.e. file_id_seq1). I have accumulated a huge number of unused sequences in my application database this way. How can I delete them?
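
    One way to find candidates (a sketch; review the list before dropping anything) is to ask the catalog for sequences that no table column owns:

        -- Sequences with no 'auto' dependency on an owning column are orphans,
        -- assuming live sequences are all owned by serial/id columns.
        SELECT c.relname
        FROM pg_class c
        WHERE c.relkind = 'S'
          AND NOT EXISTS (
              SELECT 1 FROM pg_depend d
              WHERE d.objid = c.oid AND d.deptype = 'a'
          );

        -- Then drop each confirmed orphan, e.g.:
        DROP SEQUENCE file_id_seq1;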

    Read the article

  • Unique identification string in PHP

    - by NardCake
    My friend and I are currently developing a website. For what we will call 'projects' we just have a basic auto-increment id in the database, used to navigate to projects such as oururl.com/viewproject?id=1. But we started thinking: if we have a lot of posted projects, that's going to be a LONG URL. So we need to randomly generate an alphanumeric string about 6 characters long. We want the chance of the string being duplicated to be extremely low, and of course we will query the database before assigning an identifier. Thanks for any help, it means a lot!
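
    A minimal sketch of one approach (the projects table and slug column are assumptions; random_int() needs PHP 7): with 62 characters over 6 positions there are about 57 billion combinations, so a check-and-retry loop will almost never loop twice.

        <?php
        // Generate a 6-character alphanumeric ID, retrying on the (rare) collision.
        // Assumes a PDO handle in $pdo and a UNIQUE `slug` column on `projects`.
        function generateSlug(PDO $pdo, $length = 6)
        {
            $chars = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';
            do {
                $slug = '';
                for ($i = 0; $i < $length; $i++) {
                    $slug .= $chars[random_int(0, strlen($chars) - 1)];
                }
                $stmt = $pdo->prepare('SELECT 1 FROM projects WHERE slug = ?');
                $stmt->execute(array($slug));
            } while ($stmt->fetchColumn() !== false);
            return $slug;
        }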

    Read the article

  • Dealing with a large number of text strings

    - by Fadrian
    When my project runs, it collects a large number of text blocks (about 20K of them, and the largest run I have seen is about 200K) in a short span of time and stores them in a relational database. Each text block is relatively small, averaging about 15 short lines (about 300 characters). The current implementation is in C# (VS2008) on .NET 3.5, and the backend DBMS is Microsoft SQL Server 2005. Performance and storage are both important concerns for the project, but the priority is performance first, then storage. I am looking for answers to these questions:

    Should I compress the text before storing it in the DB, or let SQL Server worry about compacting the storage?

    Do you know the best compression algorithm/library to use in this context for the best performance? Currently I just use the standard GZip in the .NET framework.

    Do you know any best practices for dealing with this? I welcome outside-the-box suggestions, as long as they are implementable in the .NET framework (it is a big project and this requirement is only a small part of it).

    EDITED: I will keep adding to this to clarify points raised. I don't need text indexing or searching on these texts; I just need to be able to retrieve them at a later stage for display as a text block, using the primary key. I have a working solution implemented as above and SQL Server has no issue at all handling it. This program will run quite often and needs to work with a large data context, so you can imagine the size will grow very rapidly; hence every optimization I can do will help.
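
    For the GZip route already in use, a minimal sketch (compress to a byte[] destined for a VARBINARY(MAX) column; the manual copy loop keeps it .NET 3.5-compatible, since Stream.CopyTo only arrived in .NET 4). Worth noting: at ~300 characters per block, GZip's header overhead can eat much of the gain, so measure against real data first.

        using System.IO;
        using System.IO.Compression;
        using System.Text;

        static class TextBlobCodec
        {
            // Compress a text block before storing it in a VARBINARY(MAX) column.
            public static byte[] Compress(string text)
            {
                byte[] raw = Encoding.UTF8.GetBytes(text);
                using (var output = new MemoryStream())
                {
                    using (var gzip = new GZipStream(output, CompressionMode.Compress))
                        gzip.Write(raw, 0, raw.Length);
                    return output.ToArray();
                }
            }

            // Reverse the process when the block is read back for display.
            public static string Decompress(byte[] blob)
            {
                using (var input = new MemoryStream(blob))
                using (var gzip = new GZipStream(input, CompressionMode.Decompress))
                using (var output = new MemoryStream())
                {
                    var buffer = new byte[4096];
                    int read;
                    while ((read = gzip.Read(buffer, 0, buffer.Length)) > 0)
                        output.Write(buffer, 0, read);
                    return Encoding.UTF8.GetString(output.ToArray());
                }
            }
        }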

    Read the article

  • Using a large list of terms, search through page text and replace words with links

    - by dunc
    A while ago I posted this question asking if it's possible to convert text to HTML links if it matches a list of terms from my database. I have a fairly huge list of terms - around 6,000. The accepted answer on that question was superb, but having never used XPath, I was at a loss when problems started occurring. At one point, after fiddling with code, I somehow managed to add over 40,000 random characters to our database - the majority of which required manual removal. Since then I've lost faith in that idea, and the simpler PHP solutions simply weren't efficient enough to deal with the amount of data and the quantity of terms.

    My next attempt at a solution is to write a JS script which, once the page has loaded, retrieves the terms and matches them against the text on the page. This answer has an idea which I'd like to attempt. I would use AJAX to retrieve the terms from the database, to build an object such as this:

        var words = [
            { word: 'Something', link: 'http://www.something.com' },
            { word: 'Something Else', link: 'http://www.something.com/else' }
        ];

    When the object has been built, I'd use this kind of code:

        // for each array element
        $.each(words, function () {
            // store it ("this" is gonna become the dom element in the next function)
            var search = this;
            $('.message').each(function () {
                // if it's exactly the same
                if ($(this).text() === search.word) {
                    // do your magic tricks
                    $(this).html('<a href="' + search.link + '">' + search.link + '</a>');
                }
            });
        });

    Now, at first sight, there is a major issue here: with 6,000 terms, will this code be in any way efficient enough to do what I'm trying to do?

    One option would be to perform some of the overhead within the PHP script that the AJAX communicates with. For instance, I could send the ID of the post, and the PHP script could use SQL statements to retrieve all of the information from the post and match it against all 6,000 terms. The return call to the JavaScript could then simply be the matching terms, which would significantly reduce the number of matches the above jQuery would make (around 50 at most).

    I have no problem with the script taking a few seconds to "load" in the user's browser, as long as it isn't hammering their CPU or anything like that. So, two questions in one: Can I make this work? What steps can I take to make it as efficient as possible? Thanks in advance,
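
    Assuming the server-side narrowing described above (the AJAX call returns only the terms that actually occur in the post), the client-side pass can also be collapsed into one combined regex instead of one scan per term - a sketch:

        // Build one alternation regex from the ~50 returned terms and replace all
        // occurrences in a single pass. Assumes `words` holds only matched terms,
        // ideally longest-first so multi-word terms win over their substrings.
        // Note: like the original snippet, .text() discards any existing markup.
        function linkTerms(words) {
            var byWord = {};
            var escaped = $.map(words, function (w) {
                byWord[w.word.toLowerCase()] = w.link;
                return w.word.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'); // escape regex chars
            });
            var pattern = new RegExp('\\b(' + escaped.join('|') + ')\\b', 'gi');

            $('.message').each(function () {
                var linked = $(this).text().replace(pattern, function (match) {
                    return '<a href="' + byWord[match.toLowerCase()] + '">' + match + '</a>';
                });
                $(this).html(linked);
            });
        }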

    Read the article

  • Drupal Error After Logging In

    - by Kim
    Hello everyone. I'm kinda new to Drupal and I'm wondering why I get this error on my new site. I have a website under WampServer running Drupal 6.16. Every time I log in with my pre-created admin account ('admin01', password 'admin01') I get redirected to the WampServer localhost page, which seems unusual since the header does not have the WampServer logo. I already tried creating a new Drupal site with the same database and the same thing happens. I also tried creating another site with a new database, copying the other site's theme and other contents, and the same thing happens. Help me please, I am losing my grip on this. :( Note: I have the same website running on one PC and I am just trying to run it on another PC by copying all its contents. The original copy works perfectly, but I can't get my new copies to work on other PCs.

    Read the article

  • How do I split the output from mysqldump into smaller files?

    - by lindelof
    I need to move entire tables from one MySQL database to another. I don't have full access to the second one, only phpMyAdmin access. I can only upload (compressed) SQL files smaller than 2MB, but the compressed output from a mysqldump of the first database's tables is larger than 10MB. Is there a way to split the output from mysqldump into smaller files? I cannot use split(1) since I cannot cat(1) the files back together on the remote server. Or is there another solution I have missed?

    Edit: The --extended-insert=FALSE option to mysqldump, suggested by the first poster, yields a .sql file that can then be split into importable files, provided that split(1) is called with a suitable --lines option. By trial and error I found that bzip2 compresses the .sql files by a factor of 20, so I needed to figure out roughly how many lines of SQL correspond to 40MB.
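
    Putting the edit together as a sketch (database/table names and the line count are placeholders to tune; with one INSERT per row, line boundaries are safe split points apart from the dump's header and footer statements):

        # One INSERT per row, so split(1) never cuts a statement in half.
        mysqldump --extended-insert=FALSE mydb mytable > dump.sql

        # Split into pieces and compress each one for the 2MB upload limit.
        split --lines=50000 dump.sql chunk_
        for f in chunk_*; do
            bzip2 "$f"
        done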

    Read the article

  • How do I manage a limit of executions per hour (max: 1000 requests per hour) without a database?

    - by cslavoie
    I am currently developing a script in PHP to fetch web pages. The thing is, in doing so I occasionally make too many requests to a particular website. In order to control any overflow, I would like to keep track of how many requests have been made in the last hour or so for each domain. It doesn't need to be perfect, just a good estimate. I don't have access to a database, except SQLite 2. I would really like something simple, because there will typically be a lot of updates, which is kind of heavy for an SQLite database. If no one has a magical solution, I'll go for SQLite, but I was curious what you can come up with. Thank you very much
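
    A minimal sketch of a database-free approach: one small counter file per domain per hour (file location and the cap are illustrative):

        <?php
        // Return true if this request is under the hourly cap for the domain.
        // The read-then-write pair is racy, but the question only asks for a
        // good estimate; old bucket files can be swept by a cron job.
        function allowRequest($domain, $maxPerHour = 1000)
        {
            $bucket = date('YmdH');  // a new file each hour = automatic reset
            $file = sys_get_temp_dir() . "/ratelimit_{$domain}_{$bucket}.cnt";
            $count = is_file($file) ? (int) file_get_contents($file) : 0;
            if ($count >= $maxPerHour) {
                return false;
            }
            file_put_contents($file, $count + 1, LOCK_EX);
            return true;
        }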

    Read the article

  • Common optimization rules

    - by mafutrct
    This is a dangerous question, so let me try to phrase it correctly. Premature optimization is the root of all evil, but if you know you need it, there is a basic set of rules that should be considered. This set is what I'm wondering about. For instance, imagine you have a list of a few thousand items. How do you look up an item with a specific, unique ID? Of course, you simply use a Dictionary to map the ID to the item. And if you know that there is a setting stored in a database that is required all the time, you simply cache it instead of issuing a database request a hundred times a second. I guess there are a few even more basic ideas. I am specifically not looking for "don't do it, for experts: don't do it yet" or "use a profiler" answers, but for really simple, general hints. If you feel this is an argumentative question, you probably misunderstood my intention.
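
    The two rules named above, as a small C# sketch (the Item type and the loader delegate are hypothetical):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Item { public int Id; public string Name; }

        static class Hints
        {
            // Rule 1: index once, then every lookup by unique ID is O(1)
            // instead of an O(n) scan of the list.
            public static Dictionary<int, Item> IndexById(IEnumerable<Item> items)
            {
                return items.ToDictionary(i => i.Id);
            }

            // Rule 2: cache a setting that is read constantly instead of
            // issuing a database request on every access.
            private static string _cachedSetting;
            public static string GetSetting(Func<string> loadFromDatabase)
            {
                if (_cachedSetting == null)
                    _cachedSetting = loadFromDatabase();
                return _cachedSetting;
            }
        }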

    Read the article

  • iPhone SDK: Data Synchronization

    - by buzzappsoftware
    I am looking for an overview of data synchronization techniques available on the iPhone platform. We need the ability to be able to sync a subset of content from a server to a local database residing on the iPhone. On other projects I have worked on, the data synchronization was handled by the database. Is that available in SQLite? If not, any suggestions on techniques? Rolling our own would not be my first choice. Thanks in advance.

    Read the article

  • Open Source PHP search engine

    - by Ravi Gupta
    I am looking for an open-source search engine plugin written in PHP for my website (eCommerce). Before anybody answers, I have a doubt regarding the search engine. Usually a search engine crawls web pages, creates indexes and then uses them when looking for content. But will the same model work for eCommerce websites? Yes, it can crawl product pages and index them, but don't you think it would be better if it crawled the database directly and indexed the products stored there? Then, when a user searches for any product, it would simply give us the rows of the table which match the user's query. Maybe what I am asking is a stupid question, but I am new to web development, so kindly help me understand the concept. I have looked at a search engine called Sphider, but didn't understand what I have to do to make it work with an eCommerce website.
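
    Indexing the database directly is exactly what a full-text index does - a sketch in MySQL (table and column names are made up):

        -- Index the searchable columns once...
        ALTER TABLE products ADD FULLTEXT INDEX idx_search (name, description);

        -- ...then return matching rows, ranked by relevance.
        SELECT *
        FROM products
        WHERE MATCH(name, description) AGAINST ('user query here');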

    Read the article

  • SQL to Filter a Subset of Data

    - by Nick LaMarca
    I have an ArrayList that holds a subset of the names found in my database. I need to write a query to get a count of the people in the ArrayList for certain sections. For instance, there is a field "City" in my database; of the people in the ArrayList of names, I want to know how many live in Chicago, how many live in New York, etc. Can someone help me set up an SQL statement to handle this? I think I somehow have to pass the subset of names to SQL.
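
    A sketch of the query (Persons/Name/City are placeholder identifiers; the ArrayList entries would be expanded into the IN list, ideally as bound parameters):

        SELECT City, COUNT(*) AS PeopleCount
        FROM Persons
        WHERE Name IN ('Alice', 'Bob', 'Carol')  -- one parameter per ArrayList entry
        GROUP BY City;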

    Read the article

  • Why does Hibernate ignore the name attribute of the @Column annotation?

    - by svachon
    Using Hibernate 3.3.1 and Hibernate Annotations 3.4; the database is DB2/400 V6R1, running on WebSphere 7.0.0.9. I have the following class:

        @Entity
        public class Ciinvhd implements Serializable {
            @Id
            private String ihinse;

            @Id
            @Column(name="IHINV#")
            private BigDecimal ihinv;
            ....
        }

    For reasons I can't figure out, Hibernate ignores the specified column name and uses 'ihinv' to generate the SQL:

        select ciinvhd0_.ihinse as ihinse13_, ciinvhd0_.ihinv as ihinv13_, ...

    Which of course gives me the following error: Column IHINV not in table CIINVHD. Has anyone had this problem before? I have other entities that are very similar, in that they use # in their database field names and those fields are part of the PK, and I don't have this problem with them.
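
    One thing worth trying (an assumption, not a confirmed fix for this case): Hibernate treats a column name wrapped in backticks as a quoted identifier and passes it through verbatim, which can rescue names containing characters like #:

        @Id
        @Column(name = "`IHINV#`")   // backticks: keep the identifier exactly as written
        private BigDecimal ihinv;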

    Read the article

  • How to store matrix information in MySQL?

    - by dedalo
    Hi, I'm working on an application that analyzes music similarity. In order to do that I process audio data and store the results in txt files. For each audio file I create two files: one containing 16 values (each value can look like 2.7000023942731723), and the other containing 16 rows, each row holding 16 values like the one previously shown. I'd like to store the contents of these two files in a table of my MySQL database. My table looks like:

        Name varchar(100)
        Author varchar(100)

    In order to add the content of those two files, I think I need to use the BLOB data type:

        file1 blob
        file2 blob

    My question is how I should store this info in the database. I'm working with Java, where I have a double array containing the 16 values (for file1) and a matrix containing the file2 info. Should I convert the values to strings and add them to the columns in my database? Thanks
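
    Serializing to delimited text and binding it with JDBC is the simplest route - a sketch (the song table and its columns are made up; TEXT columns keep the data readable, though BLOBs work the same way):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.util.Arrays;
        import java.util.stream.Collectors;

        class MatrixStore {
            // 16 doubles -> "2.7000023942731723,0.1234,..." (precision preserved).
            static String toCsv(double[] row) {
                return Arrays.stream(row)
                             .mapToObj(Double::toString)
                             .collect(Collectors.joining(","));
            }

            static void insert(Connection conn, String name, String author,
                               double[] vector, double[][] matrix) throws SQLException {
                // 16 rows joined with newlines; split on '\n' then ',' to read back.
                String matrixCsv = Arrays.stream(matrix)
                                         .map(MatrixStore::toCsv)
                                         .collect(Collectors.joining("\n"));
                try (PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO song (name, author, vector, matrix) VALUES (?, ?, ?, ?)")) {
                    ps.setString(1, name);
                    ps.setString(2, author);
                    ps.setString(3, toCsv(vector));
                    ps.setString(4, matrixCsv);
                    ps.executeUpdate();
                }
            }
        }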

    Read the article

  • Is it a problem if I query SQL Server 2005 and 2000 again and again?

    - by learner
    The Windows app I am constructing is for very low-end machines (Celeron with at most 128MB RAM). Of the following two approaches, which one is best? (I don't want the application to become a memory hog on low-end machines.)

    Approach one: query the database with

        SELECT GUID FROM Table1 WHERE DateTime <= @givendate

    which returns more than 300 thousand records (but only one field, i.e. GUID - 300 thousand GUIDs), then run a loop over them to carry out the next step of this software for each GUID.

    Approach two: query the database with

        SELECT TOP 1 GUID FROM Table1 WHERE DateTime <= @givendate

    again and again until all 300 thousand records are done. It returns only one GUID at a time, and I can do my next operation on it.

    Which approach do you suggest will use the least memory? (Speed/performance is not the issue here.)
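
    A middle ground worth sketching (batch size and the keyset column are assumptions): pull the GUIDs in fixed-size batches, so neither 300 thousand rows sit in memory at once nor 300 thousand round trips are made:

        -- Keyset paging: remember the last GUID processed, ask for the next batch.
        SELECT TOP 1000 GUID
        FROM Table1
        WHERE DateTime <= @givendate
          AND GUID > @lastProcessedGuid   -- start past the previous batch
        ORDER BY GUID;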

    Read the article

  • How to check a user's online status on a web site?

    - by Milan Sanda Sri
    How can I know when a user is online or offline? When a user logs in to my site, I set a script to change a database table field for that user, a boolean that indicates the user's online status. But my problem is that if he/she leaves my site without clicking the log-out button, my script never runs and the database still shows him/her as an online user. Please give me any suggestion to fix this; I have no idea what to do! I've checked some answers on this topic, and most of them say something like: if the last activity was within the past 15 minutes, the user is online; otherwise offline. But I have seen some social networking sites show that we have gone offline the moment we close the browser. How do they do that?
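
    The usual trick behind the instant-offline effect is a short heartbeat rather than a login flag - a sketch (URLs and intervals are made up):

        // Ping the server every 30 seconds while the page is open; the server
        // stamps last_activity for the session. Anyone whose last ping is older
        // than ~60 seconds is shown as offline, so closing the browser "logs
        // out" within a minute without any explicit action.
        setInterval(function () {
            $.post('/heartbeat.php');
        }, 30000);

        // Optional: notify the server the instant the tab is closed.
        window.addEventListener('unload', function () {
            navigator.sendBeacon('/offline.php');
        });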

    Read the article

  • Run a site on Scheme

    - by Lajla
    I can't find this on Google (so maybe it doesn't exist), but I'd basically like to install something on a web server so that I can run a site on Scheme. PHP is starting to annoy me and I want to get rid of it. What I want is:

    - Run Scheme sources towards UTF-8 output (duh)
    - Support for SXML, SXSLT et cetera; I plan to compose the whole thing in SXML and transform it to the normal representation at the end
    - Ability to read other files from the server, write them, set permissions et cetera
    - Also some things to, for instance, determine the file size of files, the height of images, MIME types and all that mumbo-jumbo
    - (Optionally) connect to a database, but for what I want to do, storing the entire database in S-expressions is feasible enough

    I don't need any fancy libraries and other things that come with it, like CMSes and whatnot, except the support for SXML - but I'm sure I can find a lib for that anyway which I can load.

    Read the article

  • Sync framework 2.0 smart device to server

    - by Oll
    We have a requirement very similar to that shown in the Occasionally Connected Application (OCA) diagram in the Introduction to Sync Framework Database Synchronization article. We can't, however, find any examples of how the clients at the bottom sync between each other - particularly, how a smart device syncs with another client when each has a local .sdf database. There are lots of examples of a smart device syncing to a server over WCF where the server runs full SQL, but not smart device to server with a local cache. Does anyone have any ideas or examples? Thanks

    Read the article

  • GitHub: keep a file but don't track changes

    - by Mike
    I have a CodeIgniter framework project that uses Git. Within this application I have several files that I want in the repo, but whose changes I don't want tracked. Example: when I deploy a new installation of this framework to a new client, I want the following files to be downloaded (they have default values, 'CHANGEME') so I only have to make the changes specific to that client (database credentials, email address info, custom CSS styling):

        // the production config files: I want the files, but they need to be
        // updated to the specific client's needs
        application/config/production/config.php
        application/config/production/database.php
        application/config/production/tank_auth.php

        // index page, defines the environment (production|development)
        /index.php

        // all of the css/js cache (keep the folder but not the contents)
        /assets/cache/*

        // production user-based styling (color, fonts etc.), needs to be
        // updated for the specific client
        /assets/frontend/css/user/frontend-user.css

    Currently, if I run git clone git@github.com:user123/myRepo.git httpdocs and then edit the files above, all is great - until I release a hotfix or patch and run git pull. All of my changes are then overwritten.
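
    One way to get this behaviour (a sketch; it is a per-clone setting, so it has to be run on each deployment): mark the files skip-worktree after editing them, and Git will leave the local versions alone.

        # Tell this clone to ignore local changes to the template files.
        git update-index --skip-worktree application/config/production/config.php
        git update-index --skip-worktree application/config/production/database.php
        git update-index --skip-worktree application/config/production/tank_auth.php
        git update-index --skip-worktree index.php

        # Undo it when the tracked template itself needs updating:
        git update-index --no-skip-worktree application/config/production/config.php

    For /assets/cache/*, the usual pattern is a .gitignore entry for the contents plus an empty placeholder file, so the folder itself stays in the repo.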

    Read the article

  • Possible to use another column name instead of the ID field in Lift?

    - by bstevens90
    I am connected to an Oracle database from a Scala/Lift webapp. I have been able to successfully pull information from the database as I wished, but I am having one issue: for each table I want to access, I am required to add an ID field so that the app will work with the trait IdPK. What mapper class or trait can I use to override this? I have been trying to find one, but have been unable to locate it. Figured people have not always had an ID field on every table they make that is just called ID...

        class DN_REC extends LongKeyedMapper[DN_REC] with IdPK {
          def getSingleton = DN_REC

          object dn_rec_id extends MappedInt(this) {
          }
        }

    This is what I am talking about. I would like to use dn_rec_id as my primary key, as it is on the table. Thanks
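
    A sketch based on how IdPK itself is written (untested; IdPK is essentially a MappedLongIndex named id plus a primaryKeyField definition, so dropping the trait and supplying your own field should work):

        class DN_REC extends LongKeyedMapper[DN_REC] {
          def getSingleton = DN_REC

          // Declare the existing column as the primary key instead of mixing in IdPK.
          def primaryKeyField = dn_rec_id

          object dn_rec_id extends MappedLongIndex(this) {
            override def dbColumnName = "DN_REC_ID"   // match the real column name
          }
        }

        object DN_REC extends DN_REC with LongKeyedMetaMapper[DN_REC]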

    Read the article

  • Entity framework Update fails when object is linked to a missing child

    - by McKay
    I'm having trouble updating an object's child when the object has a reference to a nonexistent child record. E.g. tables Car and CarColor have a relationship: Car.CarColorId -> CarColor.CarColorId. If I load the car with its color record like this:

        var result = from x in database.Car.Include("CarColor")
                     where x.CarId == 5
                     select x;

    I'll get back the Car object and its Color object. Now suppose that some time ago a CarColor was deleted, but the Car record in question still contains that CarColorId value. When I run the query, the Color object is null because the CarColor record doesn't exist. My problem is that when I attach another Color object that does exist, I get a store update/insert error when saving:

        Car.Color = newColor;
        Database.SaveChanges();

    It's like the context is trying to delete the nonexistent color. How can I get around this?

    Read the article

  • Does sending mail with mail() hide the recipient's address?

    - by user161179
    I am trying to build an email messaging system for a classified site (a la craigslist), so that users can email each other. The emails of registered users are stored in a database. What I want is for the recipient's email address to be hidden from the sender. If I just use the mail() function and dynamically fetch the recipient's email from the database, will this email address be visible to the person sending the mail? And if the recipient's email is indeed hidden from the sender when using mail() this way, then why does craigslist anonymize email? Isn't it already anonymous? Edit: so the email won't be visible to the person filling out the form. So the question remains: why does craigslist anonymize email addresses, and should I implement the same?
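
    A sketch of the flow (the schema and addresses are made up; $pdo is an existing PDO connection): the browser only ever submits a listing ID, and the address lookup happens server-side, so nothing in the form or page exposes it.

        <?php
        // The sender never sees the recipient's address: it is resolved from
        // the listing ID on the server and used only in the mail() call.
        $listingId = (int) $_POST['listing_id'];
        $stmt = $pdo->prepare(
            'SELECT u.email FROM users u
             JOIN listings l ON l.user_id = u.id
             WHERE l.id = ?');
        $stmt->execute(array($listingId));
        $recipient = $stmt->fetchColumn();

        mail($recipient, 'Reply to your listing', $_POST['message'],
             'From: relay@example.com');  // a neutral From: hides the sender too

    The likely reason craigslist still anonymizes is the reply: once the recipient answers from their real mailbox, both addresses are exposed, so craigslist relays replies through generated addresses instead.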

    Read the article

  • How to avoid storing the repeated email field of a Symfony form?

    - by user454760
    Hello everybody, I am working with Symfony 1.4 and Doctrine. I have a model A with an email field. The form of A displays an input into which the user should insert the email correctly. But as everybody knows, sometimes they don't. To fix this I have inserted an extra field into the model (and into the form), called repeat_email, to prevent misspellings. Then, in the validation process, after validating all the fields, I use a global validator to compare the data of the two fields. This works, but I don't want the email stored twice in the database (I don't want repeat_email). Is there any mechanism to use it in the validation process but not store it in the database? Thanks,
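
    A sketch of that mechanism (assuming a generated form class named AForm): declare repeat_email on the form only, not on the model, so it takes part in validation but is never saved:

        <?php
        class AForm extends BaseAForm
        {
            public function configure()
            {
                // Form-only field: validated, but absent from the model.
                $this->widgetSchema['repeat_email'] = new sfWidgetFormInputText();
                $this->validatorSchema['repeat_email'] = new sfValidatorEmail();

                // Compare the two values after field-level validation.
                $this->mergePostValidator(new sfValidatorSchemaCompare(
                    'email', sfValidatorSchemaCompare::EQUAL, 'repeat_email',
                    array(), array('invalid' => 'The email addresses do not match.')
                ));
            }
        }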

    Read the article

  • Intersect() and Except() are too slow with large collections of custom objects

    - by Theo
    I am importing data from another database. My process imports data from a remote DB into a List<DataModel> named remoteData, and also imports data from the local DB into a List<DataModel> named localData. I am then using LINQ to create a list of records that are different, so that I can update the local DB to match the data pulled from the remote DB, like this:

        var outdatedData = this.localData.Intersect(this.remoteData, new OutdatedDataComparer()).ToList();

    I am then using LINQ to create a list of records that no longer exist in remoteData but do exist in localData, so that I can delete them from the local database, like this:

        var oldData = this.localData.Except(this.remoteData, new MatchingDataComparer()).ToList();

    I am then using LINQ to do the opposite of the above, to add the new data to the local database, like this:

        var newData = this.remoteData.Except(this.localData, new MatchingDataComparer()).ToList();

    Each collection imports about 70k records, and each of the 3 LINQ operations takes between 5 and 10 minutes to complete. How can I make this faster?

    Here is the object the collections are using:

        internal class DataModel
        {
            public string Key1 { get; set; }
            public string Key2 { get; set; }
            public string Value1 { get; set; }
            public string Value2 { get; set; }
            public byte? Value3 { get; set; }
        }

    The comparer used to check for outdated records:

        class OutdatedDataComparer : IEqualityComparer<DataModel>
        {
            public bool Equals(DataModel x, DataModel y)
            {
                var e =
                    string.Equals(x.Key1, y.Key1) &&
                    string.Equals(x.Key2, y.Key2) &&
                    (
                        !string.Equals(x.Value1, y.Value1) ||
                        !string.Equals(x.Value2, y.Value2) ||
                        x.Value3 != y.Value3
                    );
                return e;
            }

            public int GetHashCode(DataModel obj) { return 0; }
        }

    The comparer used to find old and new records:

        internal class MatchingDataComparer : IEqualityComparer<DataModel>
        {
            public bool Equals(DataModel x, DataModel y)
            {
                return string.Equals(x.Key1, y.Key1) && string.Equals(x.Key2, y.Key2);
            }

            public int GetHashCode(DataModel obj) { return 0; }
        }
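
    The constant GetHashCode is the likely culprit (an observation on the code above, not a confirmed benchmark): returning 0 for every object puts all 70k items into a single hash bucket, so Intersect and Except degrade to pairwise Equals calls. A sketch of a key-based hash that restores near-linear behaviour for both comparers:

        // Hash the same fields the comparers treat as the identity (Key1, Key2).
        // Objects that compare equal still get equal hashes, but unrelated
        // records now land in different buckets.
        public int GetHashCode(DataModel obj)
        {
            unchecked
            {
                int hash = 17;
                hash = hash * 31 + (obj.Key1 != null ? obj.Key1.GetHashCode() : 0);
                hash = hash * 31 + (obj.Key2 != null ? obj.Key2.GetHashCode() : 0);
                return hash;
            }
        }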

    Read the article
