Search Results

Search found 70568 results on 2823 pages for 'ef database first'.


  • Error with Arabic script in MySQL

    - by fusion
    I inserted data into a MySQL database that includes Arabic script. While the output displays the Arabic correctly, the data in MySQL looks like garbage, something like this: '&#1589;&#1614;&#1608;&#1605;&#1615; &#1579;&#1614;&#1604;&#1575;&#1579;&#1614;&#1577;&#1616; &#1571;&#1610;&#1617;&#1575;&#1605;&#1613; &#1605;&#1616;&#1606; &#1603;&#1615;&#1604;&#1617;&#1616; &#1588;&#1614;&#1607;&#1585;&#1613; &#1600; &#1571;&#1585;&#1576;&#1614;&#1593;&#1575;&#1569;&#1615; &#1576;&#1614;&#1610;&#1606;&#1614; &#1582;&#1614; Should I be worried about this? If so, how do I make it appear as proper Arabic script in MySQL? Thanks.
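
    The symptom (numeric HTML entities like &#1589; stored in the table, but correct Arabic in the browser) usually means the text was HTML-encoded before the INSERT, or the connection/column character set can't hold Arabic. A minimal sketch, assuming mysqli and a UTF-8 table; the posts table and body column are hypothetical names:

        <?php
        // Hedged sketch: store raw UTF-8 Arabic instead of HTML entities.
        // Table `posts` and column `body` are hypothetical.
        $db = new mysqli('localhost', 'user', 'pass', 'mydb');
        $db->set_charset('utf8');   // use 'utf8mb4' on MySQL 5.5.3+

        // Make sure the column can hold Arabic:
        // ALTER TABLE posts MODIFY body TEXT CHARACTER SET utf8 COLLATE utf8_unicode_ci;

        $arabic = 'صوم ثلاثة أيام';   // raw UTF-8, no htmlentities() before inserting
        $stmt = $db->prepare('INSERT INTO posts (body) VALUES (?)');
        $stmt->bind_param('s', $arabic);
        $stmt->execute();

        // Rows that already contain entities can be decoded once on the way out:
        echo html_entity_decode('&#1589;&#1614;&#1608;&#1605;', ENT_QUOTES, 'UTF-8');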

    Read the article

  • Django forms: cannot save a picture file

    - by dana
    i have the model: class OpenCv(models.Model): created_by = models.ForeignKey(User, blank=True) first_name = models.CharField(('first name'), max_length=30, blank=True) last_name = models.CharField(('last name'), max_length=30, blank=True) url = models.URLField(verify_exists=True) picture = models.ImageField(help_text=('Upload an image (max %s kilobytes)' %settings.MAX_PHOTO_UPLOAD_SIZE),upload_to='jakido/avatar',blank=True, null= True) bio = models.CharField(('bio'), max_length=180, blank=True) date_birth = models.DateField(blank=True,null=True) domain = models.CharField(('domain'), max_length=30, blank=True, choices = domain_choices) specialisation = models.CharField(('specialization'), max_length=30, blank=True) degree = models.CharField(('degree'), max_length=30, choices = degree_choices) year_last_degree = models.CharField(('year last degree'), max_length=30, blank=True,choices = year_last_degree_choices) lyceum = models.CharField(('lyceum'), max_length=30, blank=True) faculty = models.ForeignKey(Faculty, blank=True,null=True) references = models.CharField(('references'), max_length=30, blank=True) workplace = models.ForeignKey(Workplace, blank=True,null=True) objects = OpenCvManager() the form: class OpencvForm(ModelForm): class Meta: model = OpenCv fields = ['first_name','last_name','url','picture','bio','domain','specialisation','degree','year_last_degree','lyceum','references'] and the view: def save_opencv(request): if request.method == 'POST': form = OpencvForm(request.POST, request.FILES) # if 'picture' in request.FILES: file = request.FILES['picture'] filename = file['filename'] fd = open('%s/%s' % (MEDIA_ROOT, filename), 'wb') fd.write(file['content']) fd.close() if form.is_valid(): new_obj = form.save(commit=False) new_obj.picture = form.cleaned_data['picture'] new_obj.created_by = request.user new_obj.save() return HttpResponseRedirect('.') else: form = OpencvForm() return render_to_response('opencv/opencv_form.html', { 'form': form, }, context_instance=RequestContext(request)) but i don't seem to save the picture in my database... something is wrong, and i can't figure out what :(
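
    Two things stand out in the view: on Django 1.0+ the values in request.FILES are UploadedFile objects, not dicts, so file['filename'] and file['content'] fail; and the ImageField already writes the upload to upload_to when the model instance is saved, so the manual file-writing block isn't needed. A minimal sketch of the view under those assumptions (the template's <form> also needs enctype="multipart/form-data", or request.FILES will be empty):

        # Hedged sketch: let the ModelForm / ImageField handle the file itself.
        # Uses the OpenCv model and OpencvForm from the question.
        def save_opencv(request):
            if request.method == 'POST':
                form = OpencvForm(request.POST, request.FILES)   # FILES must be passed in
                if form.is_valid():
                    new_obj = form.save(commit=False)
                    new_obj.created_by = request.user
                    new_obj.save()          # ImageField saves the file to upload_to here
                    return HttpResponseRedirect('.')
            else:
                form = OpencvForm()
            return render_to_response('opencv/opencv_form.html', {'form': form},
                                      context_instance=RequestContext(request))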

    Read the article

  • PHP - Drilling down Data and Looping with Loops

    - by stogdilla
    I'm currently having difficulty making my loops work. I have a table of data with 15-minute values, and I need to pull the data up in a few different increments: $filters=Array('Yrs','Qtr','Day','60','30','15'); I think I know what I need to drill down to, but the issue comes after the first loop cycles through all the outermost values (e.g. the user says they want to display by hours; each hour should have a "+" that adds a new div showing the half-hour data, and each half-hour row has a "+" that shows the 15-minute data on request). I could just hard-code the output for each level (6 different outputs) just in case... but isn't there a way to do the drill-down for each one in a loop, so I only code one output and have it check whether there are any more intervals below it? I'm sure I'm overlooking something very simple, but my brain isn't being clever today. Sorry in advance if this is a simple solution. The closest analogy I can think of is replies on a forum: how would you check whether something is a reply to a reply, and whether that reply has any replies, and so on, for output? Can anyone help or at least point me in the right direction, or am I stuck coding each possible check? Thanks in advance!
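
    One way to avoid coding six nearly identical outputs is a single recursive renderer that walks the $filters list one level at a time. A minimal sketch; get_rows_for() is a hypothetical helper that returns the rows for one interval under one parent, not code from the question:

        <?php
        // Hedged sketch: one recursive function instead of six hand-coded levels.
        $filters = array('Yrs', 'Qtr', 'Day', '60', '30', '15');

        function render_level(array $filters, $depth, $parentKey) {
            if ($depth >= count($filters)) {
                return;                                  // no finer interval left
            }
            foreach (get_rows_for($filters[$depth], $parentKey) as $row) {
                echo '<div class="level-' . $depth . '">';
                echo htmlspecialchars($row['label']);
                if ($depth + 1 < count($filters)) {
                    echo ' <a class="expand" href="#">+</a>';
                    render_level($filters, $depth + 1, $row['key']);   // drill down
                }
                echo '</div>';
            }
        }

        // Start at whatever increment the user picked, e.g. hours:
        render_level($filters, array_search('60', $filters), null);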

    Read the article

  • Why does animate not work for the first few 'selections' of my next button?

    - by Josiah
    I'll just start right off the bat and say I'm fairly new to JQuery, so if you see some glaring issues with my code....let me know what I'm doing wrong! Either way, I've been working on a script to fade divs in and out using the z-index and animate. It "works" after about 2-3 clicks, but the first two clicks do not fade or animate as I was hoping....why is that? I'll just throw javascript up here, but if you need/want more code, just let me know. Thanks! $(document).ready(function() { //Slide rotation/movement variables var first = $('#main div.slide:first').attr('title'); var last = $('#main div.slide').length; //Needed for the next/prev buttons var next; //Set the first div to the front, and variable for first div var $active = $('#main div.slide[title='+first+']'); //Hide the links until the div is hovered over and take them away when mouse leaves $('#main').children('a').hide(); $('#main').mouseenter(function() { $('#main').children('a').fadeIn(750); }).mouseleave(function() { $('#main').children('a').fadeOut(750); }); $active.css('z-index', '4'); $('#main #next').click(function() { if ((next = parseInt($active.attr('title')) + 1) > last) { next = 1; } $active.css('z-index', '0').stop().animate({opacity: 0}, 1000); $active = $('#main div[title='+next+']').css('z-index', '4').stop().animate({opacity : 1}, 1000); }); });
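
    A likely cause is that every slide starts at full opacity, and only the z-index changes on the first clicks, so there is nothing visible to fade until a slide has been animated to opacity 0 at least once. A minimal sketch of initialising the stack before binding the handler (the selectors are the ones from the question; the hover/link fading is unchanged and omitted here):

        // Hedged sketch: give every slide an explicit starting opacity so the
        // very first animate() call actually has something to fade.
        $(document).ready(function () {
            var $slides = $('#main div.slide');
            var $active = $slides.first();

            $slides.css({opacity: 0, 'z-index': 0});   // hide everything...
            $active.css({opacity: 1, 'z-index': 4});   // ...except the first slide

            $('#main #next').click(function () {
                var next = parseInt($active.attr('title'), 10) + 1;
                if (next > $slides.length) { next = 1; }

                $active.css('z-index', 0).stop().animate({opacity: 0}, 1000);
                $active = $('#main div[title=' + next + ']')
                              .css('z-index', 4).stop().animate({opacity: 1}, 1000);
            });
        });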

    Read the article

  • "Thread was being aborted" 0n large dataset

    - by Donaldinio
    I am trying to process 114,000 rows in a dataset (populated from an oracle database). I am hitting an error at around the 600 mark - "Thread was being aborted". All I am doing is reading the dataset, and I still hit the issue. Is this too much data for a dataset? It seems to load into the dataset ok though. I welcome any better ways to process this amount of data. rootTermsTable = entKw.GetRootKeywordsByCategory(catID); for (int k = 0; k < rootTermsTable.Rows.Count; k++) { string keywordID = rootTermsTable.Rows[k]["IK_DBKEY"].ToString(); ... } public DataTable GetKeywordsByCategory(string categoryID) { DbProviderFactory provider = DbProviderFactories.GetFactory(connectionProvider); DbConnection con = provider.CreateConnection(); con.ConnectionString = connectionString; DbCommand com = provider.CreateCommand(); com.Connection = con; com.CommandText = string.Format("Select * From icm_keyword WHERE (IK_IC_DBKEY = {0})",categoryID); com.CommandType = CommandType.Text; DataSet ds = new DataSet(); DbDataAdapter ad = provider.CreateDataAdapter(); ad.SelectCommand = com; con.Open(); ad.Fill(ds); con.Close(); DataTable dt = new DataTable(); dt = ds.Tables[0]; return dt; //return ds.Tables[0].DefaultView; }
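
    "Thread was being aborted" in an ASP.NET page that loops for a long time is often the request execution timeout firing rather than any DataSet size limit; streaming the rows with a DbDataReader keeps memory flat and lets each row be processed as it arrives. A minimal sketch, reusing the provider-factory fields from the question (and, like the original, inlining categoryID; a real parameter would be safer):

        // Hedged sketch: stream rows instead of filling a DataSet.
        public void ProcessKeywordsByCategory(string categoryID)
        {
            DbProviderFactory provider = DbProviderFactories.GetFactory(connectionProvider);
            using (DbConnection con = provider.CreateConnection())
            using (DbCommand com = provider.CreateCommand())
            {
                con.ConnectionString = connectionString;
                com.Connection = con;
                com.CommandText = string.Format(
                    "SELECT IK_DBKEY FROM icm_keyword WHERE IK_IC_DBKEY = {0}", categoryID);

                con.Open();
                using (DbDataReader reader = com.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        string keywordID = reader["IK_DBKEY"].ToString();
                        // ... process one keyword at a time ...
                    }
                }
            }
        }

    If it really is the timeout, raising httpRuntime executionTimeout in web.config (or moving the work off the request thread entirely) is the other half of the fix.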

    Read the article

  • Help Optimizing MySQL Table (~ 500,000 records) and PHP Code.

    - by Pyrite
    I have a MySQL table that collects player data from various game servers (Urban Terror). The bot that collects the data runs 24/7, and currently the table is up to about 475,000+ records. Because of this, querying this table from PHP has become quite slow. I wonder what I can do on the database side of things to make it as optomized as possible, then I can focus on the application to query the database. The table is as follows: CREATE TABLE IF NOT EXISTS `people` ( `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT, `name` varchar(40) NOT NULL, `ip` int(4) unsigned NOT NULL, `guid` varchar(32) NOT NULL, `server` int(4) unsigned NOT NULL, `date` int(11) NOT NULL, PRIMARY KEY (`id`), UNIQUE KEY `Person` (`name`,`ip`,`guid`), KEY `server` (`server`), KEY `date` (`date`), KEY `PlayerName` (`name`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 COMMENT='People that Play on Servers' AUTO_INCREMENT=475843 ; I'm storying the IPv4 (ip and server) as 4 byte integers, and using the MySQL functions NTOA(), etc to encode and decode, I heard that this way is faster, rather than varchar(15). The guid is a md5sum, 32 char hex. Date is stored as unix timestamp. I have a unique key on name, ip and guid, as to avoid duplicates of the same player. Do I have my keys setup right? Is the way I'm storing data efficient? Here is the code to query this table. You search for a name, ip, or guid, and it grabs the results of the query and cross references other records that match the name, ip, or guid from the results of the first query, and does it for each field. This is kind of hard to explain. But basically, if I search for one player by name, I'll see every other name he has used, every IP he has used and every GUID he has used. <form action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post"> Search: <input type="text" name="query" id="query" /><input type="submit" name="btnSubmit" value="Submit" /> </form> <?php if (!empty($_POST['query'])) { ?> <table cellspacing="1" id="1up_people" class="tablesorter" width="300"> <thead> <tr> <th>ID</th> <th>Player Name</th> <th>Player IP</th> <th>Player GUID</th> <th>Server</th> <th>Date</th> </tr> </thead> <tbody> <?php function super_unique($array) { $result = array_map("unserialize", array_unique(array_map("serialize", $array))); foreach ($result as $key => $value) { if ( is_array($value) ) { $result[$key] = super_unique($value); } } return $result; } if (!empty($_POST['query'])) { $query = trim($_POST['query']); $count = 0; $people = array(); $link = mysql_connect('localhost', 'mysqluser', 'yea right!'); if (!$link) { die('Could not connect: ' . 
mysql_error()); } mysql_select_db("1up"); $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date FROM 1up_people WHERE (name LIKE \"%$query%\" OR INET_NTOA(ip) LIKE \"%$query%\" OR guid LIKE \"%$query%\")"; $result = mysql_query($sql, $link); if (!$result) { die(mysql_error()); } // Now take the initial results and parse each column into its own array while ($row = mysql_fetch_array($result, MYSQL_NUM)) { $name = htmlspecialchars($row[1]); $people[] = array( 'id' => $row[0], 'name' => $name, 'ip' => $row[2], 'guid' => $row[3], 'server' => $row[4], 'date' => $row[5] ); } // now for each name, ip, guid in results, find additonal records $people2 = array(); foreach ($people AS $person) { $ip = $person['ip']; $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date FROM 1up_people WHERE (ip = \"$ip\")"; $result = mysql_query($sql, $link); while ($row = mysql_fetch_array($result, MYSQL_NUM)) { $name = htmlspecialchars($row[1]); $people2[] = array( 'id' => $row[0], 'name' => $name, 'ip' => $row[2], 'guid' => $row[3], 'server' => $row[4], 'date' => $row[5] ); } } $people3 = array(); foreach ($people AS $person) { $guid = $person['guid']; $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date FROM 1up_people WHERE (guid = \"$guid\")"; $result = mysql_query($sql, $link); while ($row = mysql_fetch_array($result, MYSQL_NUM)) { $name = htmlspecialchars($row[1]); $people3[] = array( 'id' => $row[0], 'name' => $name, 'ip' => $row[2], 'guid' => $row[3], 'server' => $row[4], 'date' => $row[5] ); } } $people4 = array(); foreach ($people AS $person) { $name = $person['name']; $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date FROM 1up_people WHERE (name = \"$name\")"; $result = mysql_query($sql, $link); while ($row = mysql_fetch_array($result, MYSQL_NUM)) { $name = htmlspecialchars($row[1]); $people4[] = array( 'id' => $row[0], 'name' => $name, 'ip' => $row[2], 'guid' => $row[3], 'server' => $row[4], 'date' => $row[5] ); } } // Combine people and people2 into just people $people = array_merge($people, $people2); $people = array_merge($people, $people3); $people = array_merge($people, $people4); $people = super_unique($people); foreach ($people AS $person) { $date = ($person['date']) ? date("M d, Y", $person['date']) : 'Before 8/1/10'; echo "<tr>\n"; echo "<td>".$person['id']."</td>"; echo "<td>".$person['name']."</td>"; echo "<td>".$person['ip']."</td>"; echo "<td>".$person['guid']."</td>"; echo "<td>".$person['server']."</td>"; echo "<td>".$date."</td>"; echo "</tr>\n"; $count++; } // Find Total Records //$result = mysql_query("SELECT id FROM 1up_people", $link); //$total = mysql_num_rows($result); mysql_close($link); } ?> </tbody> </table> <p> <?php echo $count." Records Found for \"".$_POST['query']."\" out of $total"; ?> </p> <?php } $time_stop = microtime(true); print("Done (ran for ".round($time_stop-$time_start)." seconds)."); ?> Any help at all is appreciated! Thank you.
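
    The schema itself looks reasonable; the heavier cost is in the querying, because INET_NTOA(ip) LIKE '%...%' can never use the ip index and the per-row follow-up queries multiply the work. A minimal sketch of collapsing the follow-ups into one self-join, using the table from the question ('search' and the IP literal are placeholders, and $query should still go through mysql_real_escape_string()):

        -- Hedged sketch: every record sharing a name, ip or guid with any record
        -- that matched the search term, in a single query.
        SELECT DISTINCT p2.id, p2.name, INET_NTOA(p2.ip) AS ip, p2.guid,
               INET_NTOA(p2.server) AS server, p2.date
        FROM 1up_people AS p1
        JOIN 1up_people AS p2
          ON p2.name = p1.name OR p2.ip = p1.ip OR p2.guid = p1.guid
        WHERE p1.name LIKE '%search%'          -- leading % still scans; avoid it if possible
           OR p1.guid = 'search'               -- exact match, so an index on guid is usable
           OR p1.ip   = INET_ATON('1.2.3.4')   -- compare against the stored integer
        ORDER BY p2.date DESC;

        -- An index on guid keeps the self-join and the exact-match lookup cheap:
        ALTER TABLE 1up_people ADD INDEX guid (guid);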

    Read the article

  • Echoing users by group name in PHP

    - by BobSapp
    What I want to do is click the name of a group (every group that I create other than the poweruser and admin groups) and have it echo all of the users in that group from the database. I have the code so far, but my problem now is how to print it all out when clicking the name of the group. My code so far is: <h3>Groups</h3> <?php include('db.php'); if (isset($_GET["groupID"])) { $sql="SELECT `group`.*, `user`.* FROM `user` inner join `group` on group.groupID=user.groupID where group.groupID= " . mysql_real_escape_string($_GET["groupID"]) ; } else { $sql="SELECT * FROM `group` WHERE groupName <> 'admin' AND groupName <> 'poweruser'" ; } $result=mysql_query($sql,$connection); while($line=mysql_fetch_array($result)){ echo "<a href='index.php?page=groups&group=".$line['groupID']."'>".$line['groupName'].'</a><br />'; } mysql_free_result($result); mysql_close($connection); ?>
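
    One thing that stands out: the link passes the id as group=, but the PHP tests $_GET["groupID"], so the first branch never runs; and when it does run, the user rows are never printed. A minimal sketch under that reading (the userName column is an assumption):

        <?php
        // Hedged sketch: read the same parameter the link sets, and print the
        // users when a group is selected. Column `userName` is assumed.
        include('db.php');

        if (isset($_GET['group'])) {                 // the link uses "group", not "groupID"
            $groupID = (int) $_GET['group'];
            $sql = "SELECT `user`.* FROM `user`
                    INNER JOIN `group` ON `group`.groupID = `user`.groupID
                    WHERE `group`.groupID = $groupID";
            $result = mysql_query($sql, $connection);
            while ($line = mysql_fetch_array($result)) {
                echo htmlspecialchars($line['userName']) . '<br />';
            }
        } else {
            $sql = "SELECT * FROM `group`
                    WHERE groupName <> 'admin' AND groupName <> 'poweruser'";
            $result = mysql_query($sql, $connection);
            while ($line = mysql_fetch_array($result)) {
                echo "<a href='index.php?page=groups&group=" . $line['groupID'] . "'>"
                     . htmlspecialchars($line['groupName']) . '</a><br />';
            }
        }
        mysql_free_result($result);
        mysql_close($connection);
        ?>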

    Read the article

  • How would I associate a "Note" class with 4+ classes without creating a lookup table for each association?

    - by Gthompson83
    I'm creating a project tasklist application. I have project, section, task and issue classes, and would like to use one class to add simple notes to any object instance of those classes. The task and issue tables both use a standard identity field as a primary key. The section table has a two-field primary key. The project table has a single int primary key defined by the user. Is there a way to associate the note class with each of these without using a separate lookup table for each class? I'm not sure my original idea is a decent way to implement this. I considered the following (each variable mapping to a field in the notes table): Private _NoteId As Integer Private _ProjectId As Integer Private _SectionId As Integer Private _SectionId2 As Integer Private _TaskId As Integer Private _IssueId As Integer Private _Note As String Private _UserId As Guid Then I would be able to write separate methods (getProjectNotes, getTaskNotes) to get the notes attached to each class. I started writing this a few weeks ago but got pulled away before I could finish. When revisiting this code today my first thought was "this is retarded". Thoughts on the drawbacks of this design?
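
    A common alternative is a single polymorphic note table: a discriminator column says which kind of parent the note belongs to, and the parent's key is stored as text so a composite or user-defined key can be packed into it. A minimal sketch; every table and column name below is illustrative:

        -- Hedged sketch: one note table instead of nullable FK columns or
        -- per-class lookup tables. Names are illustrative.
        CREATE TABLE Note (
            NoteId     INT IDENTITY(1,1) PRIMARY KEY,
            ParentType VARCHAR(20)      NOT NULL,  -- 'Project', 'Section', 'Task', 'Issue'
            ParentKey  VARCHAR(100)     NOT NULL,  -- '42' or 'sectionKey1|sectionKey2'
            NoteText   NVARCHAR(MAX)    NOT NULL,
            UserId     UNIQUEIDENTIFIER NOT NULL
        );

        CREATE INDEX IX_Note_Parent ON Note (ParentType, ParentKey);

        -- Notes for task 17:
        SELECT NoteText FROM Note WHERE ParentType = 'Task' AND ParentKey = '17';

    The trade-off is that the database can no longer enforce these as real foreign keys, which is the usual objection to the pattern.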

    Read the article

  • SQL left join with multiple rows into one row

    - by beardedd
    Basically, I have two tables, Table A contains the actual items that I care to get out, and Table B is used for language translations. So, for example, Table A contains the actual content. Anytime text is used within the table, instead of storing actual varchar values, ids are stored that relate back to text stored in Table B. This allows me to by adding a languageID column to Table B, have multiple translations for the same row in the database. Example: Table A Title (int) Description (int) Other Data.... Table B TextID (int) - This is the column whose value is stored in other tables LanguageID (int) Text (varchar) My question is more a call for suggestions on how to best handle this. Ideally I want a query that I can use to select from the table, and get the text as opposed to the ids of the text out of the table. Currently when I have two text items in the table this is what I do: SELECT C.ID, C.Title, D.Text AS Description FROM (SELECT A.ID, A.Description, B.Text AS Title FROM TableA A, TranslationsTable B WHERE A.Title = B.TextID AND B.LanguaugeID = 1) C LEFT JOIN TranslationsTable D ON C.Description = D.TextID AND D.LanguaugeID = 1 This query gives me the row from Table A I am looking for (using where statements in the inner select statement) with the actual text based on the language ID used instead of the text ids. This works fine when I am only using one or two text items that need to be translated, but adding a third item or more, it starts to get really messy - essentially another left join on top of the example. Any suggestions on a better query, or at least a good way to handle 3 or more text items in a single row?
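
    Instead of nesting one subquery per text column, the translation table can be left-joined once per column under its own alias, which keeps the query flat however many text ids the row carries. A minimal sketch using the tables from the question:

        -- Hedged sketch: one aliased LEFT JOIN per translated column.
        SELECT a.ID,
               t_title.Text AS Title,
               t_desc.Text  AS Description
        FROM   TableA a
        LEFT JOIN TranslationsTable t_title
               ON t_title.TextID = a.Title       AND t_title.LanguageID = 1
        LEFT JOIN TranslationsTable t_desc
               ON t_desc.TextID  = a.Description AND t_desc.LanguageID = 1;
        -- A third or fourth text column is just one more aliased LEFT JOIN.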

    Read the article

  • Connection string problems on shared hosting with SQL Server 2005 Express

    - by dagogo
    Hi, I have a problem connecting to my database on shared hosting. My host provider says they deployed SQL Server 2005 Express for their databases, and I prepared my connection string accordingly to take advantage of SQL Express. The data source name I used originally was ./SQLExpress, but my host provider asked that I change it to localhost; with the former it didn't connect, and with the change the error below still comes up when I access my default page. The error is as follows: Server Error in '/' Application. Invalid value for key 'attachdbfilename'. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.ArgumentException: Invalid value for key 'attachdbfilename'. Source Error: Line 120: Public Function GetID(ByVal sLgaName As String) As Integer Line 121: Dim q As String = "Select PLID " & "From LGA " & "Where LGAName = " & "'" & sLgaName & "'" Line 122: Dim cn As New SqlConnection(Me.ConnectionString) Line 123: Dim cmd As New SqlCommand(q, cn) Line 124: I've read up a lot on the web and googled my fingers numb on this. I have a deadline to deliver this project, and having successfully built the app, it is frustrating for this to happen. Please help me.
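
    "Invalid value for key 'attachdbfilename'" usually means the connection string still carries an AttachDbFilename=|DataDirectory|...mdf entry, which works for a local user-instance database but not on a shared SQL Server host; pointing at the hosted database by name normally clears it. A hedged example for web.config in which every value is a placeholder to be taken from the host's control panel:

        <!-- Hedged sketch: connect to the hosted database directly, with no
             AttachDbFilename and no User Instance. All values are placeholders. -->
        <connectionStrings>
          <add name="MyDb"
               connectionString="Data Source=localhost;Initial Catalog=YourDbName;User ID=YourDbUser;Password=YourDbPassword;"
               providerName="System.Data.SqlClient" />
        </connectionStrings>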

    Read the article

  • How do software projects go over budget and under-deliver?

    - by Carlos
    I've come across this story quite a few times here in the UK: the NHS Computer System. Summary: we're spunking £12 billion on some health software with barely anything working. I was sitting in the office discussing this with my colleagues, and we had a little think about it. From what I can see, all the NHS needs is a database + a middle tier of drugs/hospitals/patients/prescriptions objects, and various GUIs for doctors and nurses to look at. You'd also need to think about security and scalability. And you'd need to sit around a hospital/pharmacy/GP's office for a bit to figure out what they need. But, all told, I'd say I could knock together something with that kind of structure in a couple of days, and maybe throw in a month or two to make it work at scale. If I had a few million quid, I could probably hire some really excellent designers to make a maintainable codebase, and also buy appropriate hardware to run the system on. I hate to trivialize something that seems to have caused so much trouble, but to me it looks like just a big distributed CRUD + UI system. So how on earth did this project bloat to £12B without producing much useful software? As I don't think the software sounds that complicated, I can only imagine that something about how it was organised caused this mess. Is outsourcing the problem? Is it a failure to get the software designers to understand the medical business? What are your experiences with projects that went over budget and under-delivered? What are best practices for large projects? Have you ever worked on such a project?

    Read the article

  • Index question: Select * with WHERE clause. Where and how to create index

    - by Mestika
    Hi, I'm working on optimizing some of my queries, and I have a query that states: select * from SC where c_id ="+c_id" The schema of SC looks like this: SC ( c_id int not null, date_start date not null, date_stop date not null, r_t_id int not null, nt int, t_p decimal, PRIMARY KEY (c_id, r_t_id, date_start, date_stop)); My immediate bid for how the index should be created is a covering index in this order: INDEX(c_id, date_start, date_stop, nt, r_t_id, t_p) My reasons for this order: the WHERE clause selects on c_id, making it the first column; next come date_start and date_stop to cover a kind of "range" defined by these parameters; next nt, because it will be selected; next r_t_id, because it is an ID for a specific type in my r_t table; and last t_p, because it is just information. I don't know if it is necessary at all to order it in a specific way when it is a SELECT * statement. I should say that SC is not the biggest table. I can't say exactly how many rows it contains, but an estimate would be between <10 and 1000. The next thing to add is that other queries insert data into SC, and I know that indexes on tables with frequent insertions can be cost-ineffective; can I somehow find a golden middle way to keep this performing well? I don't know if it makes a difference, but I'm using an IBM DB2 version 9.7 database. Sincerely, Mestika
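
    Since the query only filters on c_id, and c_id is already the leading column of the primary key, DB2 can use the existing PK index, so a separate index may not be needed at all for a table this small. If the goal is a true covering (index-only) access anyway, this is a minimal sketch of the shape, using the SC columns from the question:

        -- Hedged sketch (DB2 9.7): covering index for SELECT ... WHERE c_id = ?
        -- Leading column = the equality predicate; the rest only widen the index
        -- so the access can stay index-only. Weigh this against insert overhead.
        CREATE INDEX sc_covering
            ON SC (c_id, date_start, date_stop, r_t_id, nt, t_p);

        -- Check whether the optimizer uses it (requires the explain tables):
        EXPLAIN PLAN FOR SELECT * FROM SC WHERE c_id = 42;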

    Read the article

  • SELECT product from subclass: How many queries do I need?

    - by Stefano
    I am building a database similar to the one described here where I have products of different type, each type with its own attributes. I report a short version for convenience product_type ============ product_type_id INT product_type_name VARCHAR product ======= product_id INT product_name VARCHAR product_type_id INT -> Foreign key to product_type.product_type_id ... (common attributes to all product) magazine ======== magazine_id INT title VARCHAR product_id INT -> Foreign key to product.product_id ... (magazine-specific attributes) web_site ======== web_site_id INT name VARCHAR product_id INT -> Foreign key to product.product_id ... (web-site specific attributes) This way I do not need to make a huge table with a column for each attribute of different product types (most of which will then be NULL) How do I SELECT a product by product.product_id and see all its attributes? Do I have to make a query first to know what type of product I am dealing with and then, through some logic, make another query to JOIN the right tables? Or is there a way to join everything together? (if, when I retrieve the information about a product_id there are a lot of NULL, it would be fine at this point). Thank you
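
    One round trip can bring back the product row plus whichever subtype row exists by left-joining every subtype table; the columns of the "wrong" subtype simply come back NULL, which the question says is acceptable. A minimal sketch with the tables above (123 is a placeholder id):

        -- Hedged sketch: fetch a product and its subtype attributes in one query.
        SELECT p.product_id,
               p.product_name,
               pt.product_type_name,
               m.title AS magazine_title,
               w.name  AS web_site_name
        FROM   product      p
        JOIN   product_type pt ON pt.product_type_id = p.product_type_id
        LEFT JOIN magazine  m  ON m.product_id = p.product_id
        LEFT JOIN web_site  w  ON w.product_id = p.product_id
        WHERE  p.product_id = 123;
        -- product_type_name (or whichever join returned non-NULL columns)
        -- tells you which subtype the row belongs to.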

    Read the article

  • Determining Best Table Structure for MySQL Performance

    - by Joe Majewski
    I'm working on a browser-based RPG for one of my websites, and right now I'm trying to determine the best way to organize my SQL tables for performance and maintenance. Here's my question: Does the number of columns in an SQL table affect the speed in which it can be queried? I am not a newbie when it comes to PHP or MySQL. I used to develop things with the common goal of getting them to work, but I've recently advanced to the stage where a functional program is not good enough unless it's fast and reliable. Anyways, right now I have a members table that has around 15 columns. It contains information such as the player's username, password, email, logins, page views, etcetera. It doesn't contain any information on the player's progress in the game, however. If I added columns for things such as army size, gold, turns, and whatnot, then it could easily rise to around 40 or 50 total columns. Oh, and my database structure IS normalized. Will a table with 50 columns that gets constantly queried be a bad idea? Should I split it into two tables; one for the user's general information and one for the user's game statistics? I know I could check the query time myself, but I haven't actually created the tables yet and I think I'd be better off with some professional advice on this important decision for my game. Thank you for your time! :)
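
    Column count matters far less than row width and which columns each query touches, but splitting the frequently updated game state into its own 1:1 table keeps the hot rows narrow and leaves the account data alone. A minimal sketch in which the column names are illustrative, not taken from the question:

        -- Hedged sketch: 1:1 split between account data and game state.
        CREATE TABLE members (
            member_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            username  VARCHAR(30)  NOT NULL,
            password  CHAR(40)     NOT NULL,
            email     VARCHAR(100) NOT NULL
            -- ... other rarely changing account columns ...
        );

        CREATE TABLE member_stats (
            member_id INT UNSIGNED PRIMARY KEY,          -- same key, 1:1 with members
            army_size INT UNSIGNED NOT NULL DEFAULT 0,
            gold      INT UNSIGNED NOT NULL DEFAULT 0,
            turns     INT UNSIGNED NOT NULL DEFAULT 0,
            FOREIGN KEY (member_id) REFERENCES members (member_id)
        );

        -- Game pages read and write only the narrow table:
        UPDATE member_stats SET gold = gold + 25 WHERE member_id = 42;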

    Read the article

  • .tpl files and website problem

    - by whitstone86
    Apologies if the title is in lowercase but it's describing an extension format. I've started using Dwoo as my template engine for PHP, and am not sure how to convert my PHP files into .tpl templates. My site is similar to, but not the same as, http://library.digiguide.com/lib/programme/Medium-319648/Drama/ with its design (except colour scheme and site name are different, plus it's in PHP - so copyright issues are avoided here, the design arguably could be seen as parody even though the content is different. The database is called tvguide, and it has these tables: Programmes House M.D. Medium Police Stop! American Dad! The tablenames of the above programmes are: housemdonair mediumonair policestopair americandad1 Episodes The tablenames for the above programmes' episode guides are: housemdepidata mediumepidata policestopepidata americandad1epidata All of them have the following rows: id (not an auto-increment, since I wish to dynamically generate a page from this) episodename seriesnumber episodenumber episodesynopsis (the above four after id do exactly as stated) I have a pagination script that works, it displays 20 records per page as I want it to. This is called pmcPagination.php - but I won't post it in full since it would take up too much space. However, I'm trying to get it so that variables are filled in like this: (ok, so the examples below are ASP.NET, but if there's a PHP/MySQL equivalent I would gratefully appreciate this!!): http://library.digiguide.com/lib/episode/741168 http://library.digiguide.com/lib/episode/714829 with the episode detail and data. My site works, but it's fairly basic, and it's not online yet until my bugs are fixed. Mod_rewrite is enabled so my site reads as http://mytvguide.com/episode/123456 or http://mytvguide.com/programme/123456 http://mytvguide.com/WorldInAction/123456/Documentary/ I've tried looking on Google, but am not sure how to get this TV guide script to work at its best - but I think .tpl, and .php/MySQL is the way to go. Any advice anyone has on making this project into a fully workable, ready to use site would be much appreciated, I've spent months refining this project! P.S. Apologies for the length of this, hope it describes my project well.
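
    For URLs like /episode/123456 the usual recipe is a rewrite rule that maps the pretty URL onto a script plus an id-based lookup; the template (a Dwoo .tpl or anything else) then only has to display the row. A minimal sketch; the episode.php name and rewrite rule are assumptions, and housemdepidata is one of the episode tables listed above:

        <?php
        // Hedged sketch for episode.php. A matching .htaccess rule would be
        // something like:  RewriteRule ^episode/([0-9]+)$ episode.php?id=$1 [L]
        $id = (int) $_GET['id'];                       // /episode/123456 -> id=123456
        $result = mysql_query(
            "SELECT id, episodename, seriesnumber, episodenumber, episodesynopsis
             FROM housemdepidata WHERE id = $id");
        $episode = mysql_fetch_assoc($result);
        // ...assign $episode to the .tpl template (Dwoo or any other engine) and render...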

    Read the article

  • Get info from multiple files, match it and then display to end user, what is fastest?

    - by Patrick
    Hi, I need to build a website where we display data that is refreshed every 5 minutes in a text file with a | separator. I currently use Java to do this. What I do now: I grab the text file for every request through the website, process it, and then display the data to the end user. This works fine, since Java can go through around 5000 lines of data fast, and when I filter it it is still extremely fast. However, now management wants the following: they added 3 more text files with the | separator, and want me to also read those files, match the information on certain fields, and if there is a match also display that information to the end user. I think that soon enough, although Java is fast, I will run into trouble when 10 people want that information and I have to run through 4 files in total matching the information. What can I do to make this process super fast? My creative solutions so far: - Leave it this way, since Java is fast and end users can wait (probably less than 1 second). - Have a background process that dumps new data into a MySQL database every 5 minutes, since databases are extremely good at getting the same data from multiple tables. Thank you!
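
    Four pipe-delimited files of a few thousand lines each fit comfortably in memory, so a common approach is to parse each secondary file once into a HashMap keyed on the match field, rebuild those maps every 5 minutes, and do O(1) lookups per displayed row no matter how many users ask. A minimal sketch in Java; the key-field position is an assumption:

        // Hedged sketch: cache one pipe-delimited file as a map keyed on the match field.
        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import java.util.HashMap;
        import java.util.Map;

        public class PipeFileCache {
            private final Map<String, String[]> byKey = new HashMap<String, String[]>();

            public PipeFileCache(String path, int keyField) throws IOException {
                BufferedReader in = new BufferedReader(new FileReader(path));
                try {
                    String line;
                    while ((line = in.readLine()) != null) {
                        String[] fields = line.split("\\|", -1);
                        byKey.put(fields[keyField], fields);   // last duplicate wins
                    }
                } finally {
                    in.close();
                }
            }

            /** Returns the matching row from this file, or null if there is none. */
            public String[] lookup(String key) {
                return byKey.get(key);
            }
        }

    Rebuilding the caches from a timer every 5 minutes keeps request handling independent of file parsing; the MySQL idea works too, but isn't required at this data volume.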

    Read the article

  • Logic: Best way to sample & count bytes of a 100MB+ file

    - by Jami
    Let's say I have this 170mb file (roughly 180 million bytes). What I need to do is to create a table that lists: all 4096 byte combinations found [column 'bytes'], and the number of times each byte combination appeared in it [column 'occurrences'] Assume two things: I can save data very fast, but I can update my saved data very slow. How should I sample the file and save the needed information? Here're some suggestions that are (extremely) slow: Go through each 4096 byte combinations in the file, save each data, but search the table first for existing combinations and update it's values. this is unbelievably slow Go through each 4096 byte combinations in the file, save until 1 million rows of data in a temporary table. Go through that table and fix the entries (combine repeating byte combinations), then copy to the big table. Repeat going through another 1 million rows of data and repeat the process. this is faster by a bit, but still unbelievably slow This is kind of like taking the statistics of the file. NOTE: I know that sampling the file can generate tons of data (around 22Gb from experience), and I know that any solution posted would take a bit of time to finish. I need the most efficient saving process
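
    The slow part of both suggestions is touching a disk-backed table once per combination; keeping the counts in an in-memory hash keyed by a short digest of each block, and writing the table out only at the end (or once per large chunk), removes the per-combination update entirely. A minimal sketch in Python, assuming non-overlapping 4096-byte blocks (a sliding window only changes the stride) and a placeholder file name:

        # Hedged sketch: count 4096-byte blocks with an in-memory hash of digests.
        import hashlib
        from collections import Counter

        BLOCK = 4096
        counts = Counter()

        with open("bigfile.bin", "rb") as f:          # placeholder file name
            while True:
                chunk = f.read(BLOCK)
                if len(chunk) < BLOCK:
                    break
                # Store a 16-byte digest instead of the 4 KB block itself.
                counts[hashlib.md5(chunk).digest()] += 1

        # One bulk write at the end instead of one update per block:
        # for digest, n in counts.items(): save_row(digest, n)
        print(len(counts), "distinct blocks,", sum(counts.values()), "total")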

    Read the article

  • Updating a composite primary key

    - by VBCSharp
    I am struggling with the philosophical discussions about whether or not to use composite primary keys on my SQL Server database. I have always used the surrogate keys in the past and I am challenging myself by leaving my comfort zone to try something different. I have read many discussion but can't come to any kind of solution yet. The struggle I am having is when I have to update a record with the composite PK. For example, the record in questions is like this: ContactID, RoleID, EffectiveDate, TerminationDT. The PK in this case is the ContactID, RoleID, and EffectiveDate. TerminationDT can be null. If in my UI, the user changes the RoleID and then I need to update the record. Using the surrogate key I can do an Update Table Set RoleID = 1 WHERE surrogateID = Z. However, using the Composite Key way, once one of the fields in the composite key changes I have no way to reference the old record to update it without now maintaining somewhere in the UI a reference to the old values. I do not bind datasources in my UI. I open a connection, get the data and store it in a bucket, then close the connection. What are everyone's opinions? Thanks.
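
    For what it's worth, re-keying a row with a composite primary key just means the UPDATE's WHERE clause carries the old value of every key column, the same values the UI loaded when it read the row. A minimal sketch; the table name ContactRole and the parameter names are assumptions:

        -- Hedged sketch: change RoleID on a row identified by its old composite key.
        UPDATE ContactRole
        SET    RoleID = @NewRoleID
        WHERE  ContactID     = @ContactID
          AND  RoleID        = @OldRoleID
          AND  EffectiveDate = @OldEffectiveDate;

    That extra bookkeeping is exactly the cost being weighed here; with a surrogate key the WHERE clause never changes.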

    Read the article

  • Pre-done SQLs to be converted to Rails-style modules

    - by Hoornet
    I am a Rails newbie and would really appreciate if someone converted these SQLs to complete modules for rails. I know its a lot to ask but I can't just use find_by_sql for all of them. Or can I? These are the SQLs (they run on MS-SQL): 1) SELECT STANJA_NA_DAN_POSTAVKA.STA_ID, STP_DATE, STP_TIME, STA_OPIS, STA_SIFRA, STA_POND FROM STANJA_NA_DAN_POSTAVKA INNER JOIN STANJA_NA_DAN ON(STANJA_NA_DAN.STA_ID=STANJA_NA_DAN_POSTAVKA.STA_ID) WHERE ((OSE_ID=10)AND (STANJA_NA_DAN_POSTAVKA.STP_DATE={d '2010-03-30'}) AND (STANJA_NA_DAN_POSTAVKA.STP_DATE<={d '2010-03-30'})) 2) SELECT ZIGI_OBDELANI.OSE_ID, ZIGI_OBDELANI.DOG_ID AS DOG_ID, ZIGI_OBDELANI.ZIO_DATUM AS DATUM, ZIGI_PRICETEK.ZIG_TIME_D AS ZIG_PRICETEK, ZIGI_KONEC.ZIG_TIME_D AS ZIG_KONEC FROM (ZIGI_OBDELANI INNER JOIN ZIGI ZIGI_PRICETEK ON ZIGI_OBDELANI.ZIG_ID_PRICETEK = ZIGI_PRICETEK.ZIG_ID) INNER JOIN ZIGI ZIGI_KONEC ON ZIGI_OBDELANI.ZIG_ID_KONEC = ZIGI_KONEC.ZIG_ID WHERE (ZIGI_OBDELANI.OSE_ID = 10) AND (ZIGI_OBDELANI.ZIO_DATUM = {d '2010-03-30'}) AND (ZIGI_OBDELANI.ZIO_DATUM <= {d '2010-03-30'}) AND (ZIGI_PRICETEK.ZIG_VELJAVEN < 0) AND (ZIGI_KONEC.ZIG_VELJAVEN < 0) ORDER BY ZIGI_OBDELANI.OSE_ID, ZIGI_PRICETEK.ZIG_TIME ASC 3) SELECT STA_ID, SUM(STP_TIME) AS SUM_STP_TIME, COUNT(STA_ID) FROM STANJA_NA_DAN_POSTAVKA WHERE ((STP_DATE={d '2010-03-30'}) AND (STP_DATE<={d '2010-03-30'}) AND (STA_ID=3) AND (OSE_ID=10)) GROUP BY STA_ID 4) SELECT DATUM, TDN_ID, TDN_OPIS, URN_OPIS, MOZNI_PROBLEMI, PRIHOD, ODHOD, OBVEZNOST, ZAKLJUCEVANJE_DATUM FROM OBRACUNAJ_DAN WHERE ((OSE_ID=10) AND (DATUM={d '2010-02-28'}) AND (DATUM<={d '2010-03-30'})) ORDER BY DATUM These SQLs are daily working hours and I got them as is. Also I got Database with it which (as you can see from the SQL-s) is not in Rails conventions. As a P.S.: 1)Things like STP_DATE={d '2010-03-30'}) are of course dates (in Slovenian date notation) and will be replaced with a variable (date), so that the user could choose date from and date to. 2) All of this data will be shown in the same page in the table,so maybe all in one module? Or many?; if this helps, maybe. So can someone help me? Its for my work and its my 1st project and I am a Rails newbie and the bosses are getting inpatient(they are getting quite loud actually) Thank you very very much!

    Read the article

  • DB optimization to use it as a queue

    - by anony
    We have a table called worktable which has some columns (key (the primary key), ptime, aname, status, content). We have something called a producer which puts rows into this table, and we have a consumer which does an order-by on the key column and fetches the first row whose status is 'pending'. The consumer does some processing on this row: 1. updates status to "processing"; 2. does some processing using content; 3. deletes the row. We are facing contention issues when we try to run multiple consumers (probably due to the order-by, which does a full table scan)... Using Advanced Queues would be our next step, but before we go there we want to check what maximum throughput we can achieve with multiple consumers and a producer on the table. What optimizations can we do to get the best numbers possible? Can we do in-memory processing where a consumer fetches 1000 rows at a time, processes them and deletes them? Will that improve things? What are the other possibilities? Partitioning the table? Parallelization? Index-organized tables?...
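
    On Oracle (which the mention of Advanced Queues suggests) the standard relief for this pattern is SELECT ... FOR UPDATE SKIP LOCKED, so each consumer claims a different pending row instead of all of them fighting over the first one; an index on (status, key) also spares the full scan. A minimal sketch, hedged because SKIP LOCKED is only documented from Oracle 11g onwards:

        -- Hedged sketch (Oracle 11g+): claim one pending row per consumer
        -- without waiting on rows other consumers have already locked.
        DECLARE
          CURSOR c_pending IS
            SELECT key, content
            FROM   worktable
            WHERE  status = 'pending'
            ORDER  BY key
            FOR UPDATE SKIP LOCKED;
          r_row c_pending%ROWTYPE;
        BEGIN
          OPEN c_pending;
          FETCH c_pending INTO r_row;        -- first pending row not locked by anyone else
          IF c_pending%FOUND THEN
            UPDATE worktable
               SET status = 'processing'
             WHERE CURRENT OF c_pending;
            COMMIT;                          -- the row is now claimed by this consumer
            -- ... process r_row.content, then DELETE the row ...
          END IF;
          CLOSE c_pending;
        END;
        /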

    Read the article

  • MS SQL - High performance data inserting with stored procedures

    - by Marks
    Hi. I'm searching for a very high-performance way to insert data into an MS SQL database. The data is a (relatively big) construct of objects with relations. For security reasons I want to use stored procedures instead of direct table access. Let's say I have a structure like this: Document MetaData User Device Content ContentItem[0] SubItem[0] SubItem[1] SubItem[2] ContentItem[1] ... ContentItem[2] ... Right now I'm thinking of creating one big query, doing something like this (just pseudo-code): EXEC @DeviceID = CreateDevice ...; EXEC @UserID = CreateUser ...; EXEC @DocID = CreateDocument @DeviceID, @UserID, ...; EXEC @ItemID = CreateItem @DocID, ... EXEC CreateSubItem @ItemID, ... EXEC CreateSubItem @ItemID, ... EXEC CreateSubItem @ItemID, ... ... But is this the best solution for performance? If not, what would be better? Split it into more queries? Give all the data to one big stored procedure to reduce the size of the query? Any other performance clue? I also thought of giving multiple items to one stored procedure, but I don't think it's possible to give a non-static number of items to a stored procedure. Since INSERT INTO A VALUES (B,C),(C,D),(E,F) is more performant than 3 single inserts, I thought I could gain some performance there. Thanks for any hints, Marks
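
    On SQL Server 2008 and later the usual answer to "a variable number of items per call" is a table-valued parameter: one procedure call carries all the items and does a single set-based INSERT, which also keeps everything inside one transaction. A minimal sketch in which the type, table and column names are illustrative:

        -- Hedged sketch (SQL Server 2008+): pass many items through one
        -- table-valued parameter. Names are illustrative.
        CREATE TYPE dbo.ContentItemList AS TABLE
        (
            ItemOrder INT           NOT NULL,
            ItemText  NVARCHAR(400) NOT NULL
        );
        GO
        CREATE PROCEDURE dbo.CreateItems
            @DocID INT,
            @Items dbo.ContentItemList READONLY
        AS
        BEGIN
            SET NOCOUNT ON;
            INSERT INTO dbo.ContentItem (DocID, ItemOrder, ItemText)
            SELECT @DocID, ItemOrder, ItemText
            FROM   @Items;                   -- one set-based insert for every item
        END
        GO

    From ADO.NET the parameter is passed as SqlDbType.Structured and filled from a DataTable, and wrapping the whole document in one transaction keeps the object graph atomic.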

    Read the article

  • Random alphanumeric generator

    - by AAA
    Hi, I want to give our users in the database a unique alpha-numeric id. I am using the code below, will this always generate a unique id? Below is the old and updated version of the code: New php: // Generate Guid function NewGuid() { $s = strtoupper(md5(uniqid("something",true))); $guidText = substr($s,0,8) . '-' . substr($s,8,4) . '-' . substr($s,12,4). '-' . substr($s,16,4). '-' . substr($s,20); return $guidText; } // End Generate Guid $Guid = NewGuid(); echo $Guid; echo "<br><br><br>"; Old PHP: // Generate Guid function NewGuid() { $s = strtoupper(md5(uniqid("something",true))); $guidText = substr($s,0,8) . '-' . substr($s,8,4) . '-' . substr($s,12,4). '-' . substr($s,16,4). '-' . substr($s,20); return $guidText; } // End Generate Guid $Guid = NewGuid(); echo $Guid; echo "<br><br><br>"; Will this first code guarantee uniqueness?
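
    md5(uniqid(..., true)) is unique for all practical purposes, but nothing guarantees it, so the robust pattern is to back the column with a UNIQUE index and retry on the rare collision. A minimal sketch built around the NewGuid() function above; the users table and guid column are assumptions:

        <?php
        // Hedged sketch: enforce uniqueness in the database and retry on collision.
        function NewGuid() {
            $s = strtoupper(md5(uniqid(mt_rand(), true)));   // extra entropy in the seed
            return substr($s,0,8).'-'.substr($s,8,4).'-'.substr($s,12,4).'-'
                  .substr($s,16,4).'-'.substr($s,20);
        }

        do {
            $guid = NewGuid();
            $ok = mysql_query(
                "INSERT INTO users (guid) VALUES ('" . mysql_real_escape_string($guid) . "')"
            );
            // error 1062 = duplicate key: we collided, so loop and try a new value
        } while (!$ok && mysql_errno() == 1062);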

    Read the article

  • Multiple Foreign keys to a single table and single key pointing to more than one table

    - by user1216775
    I need some suggestions from the database design experts here. I have around six foreign keys in a single table (defect) which all point to the primary key in the user table. It looks like: defect (.....,assigned_to,created_by,updated_by,closed_by...) If I want to get information about the defect, I can make six joins. Is there any better way to do it? The other question is that I have a states table which can store one of a user-defined set of values. I have a defect table and a task table, and I want both of these tables to share the common state table (New, In Progress, etc.). So I created: task (.....,state_id,type_id,.....) defect(.....,state_id,type_id,...) state(state_id,state_name,...) importance(imp_id,imp_name,...) There are many such common attributes besides state, like importance (normal, urgent, etc.), priority, etc., and for all of them I want to use the same table. I am keeping one flag in each of those tables to differentiate task and defect. What is the best solution in such a case? If somebody uses this application in the health domain, they would like to assign different types, states and importances to their defects or tasks. Moreover, when a user selects any project I want to display all the types, states, etc. under a configuration parameters section.
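
    Several foreign keys into the same user table are normal; the SELECT simply joins the user table once per role column under a different alias (LEFT JOIN where the column may be NULL). A minimal sketch; the users table, its column names and the placeholder id are assumptions:

        -- Hedged sketch: one aliased join per user-role column on defect.
        SELECT d.defect_id,
               u_assigned.user_name AS assigned_to,
               u_created.user_name  AS created_by,
               u_updated.user_name  AS updated_by,
               u_closed.user_name   AS closed_by
        FROM   defect d
        LEFT JOIN users u_assigned ON u_assigned.user_id = d.assigned_to
        LEFT JOIN users u_created  ON u_created.user_id  = d.created_by
        LEFT JOIN users u_updated  ON u_updated.user_id  = d.updated_by
        LEFT JOIN users u_closed   ON u_closed.user_id   = d.closed_by
        WHERE  d.defect_id = 42;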

    Read the article

  • What are proven, scalable data persistence solutions for consumer profiles?

    - by Hubbard
    Consumer profiles with analytical scores [ConsumerID, 1..n demographical variables, 1...n analytical scores e.g. "likely to churn" "likely to buy an item 100$ in worth" etc.] have to be possible to query fast if they are to be used in customizing web-sites, consumer communications etc. Well. If you have: Large number of consumers Large profiles with a huge set of variables (as profiles describing human behaviour are likely to be..) ...you are in trouble. If you really have a physical relational database to which you target a query and then a physical disk starts to rotate someplace to give you an individual profile or a set of profiles, the profile user (a web site customizing a page, a recommendation engine making a recommendation..) has died of boredom before getting any observable results. There is the possibility of having the profiles in memory, which would of course increase the performance hugely. What are the most proven solutions for a fast-response, scalable consumer profile storage? Is there a shootout of these someplace?

    Read the article
