Search Results

Search found 60391 results on 2416 pages for 'data generation'.


  • Changing the column order / adding a new column for an existing table in SQL Server 2008

    - by rmdussa
    Hi, I have a situation where I need to change the order of the columns and add new columns to an existing table in SQL Server 2008. The designer is not allowing me to do this without a drop and recreate, but this is a production system and the table holds data. I can take a backup of the data, drop the existing table, recreate it with the changed order and the new columns, and insert the backed-up data into the new table. Is there any better way to do this without dropping and recreating? I think SQL Server 2005 allowed changing an existing table structure this way without dropping and recreating. Thanks
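
    Two notes, offered as a hedged sketch rather than a definitive fix (the table and column names below are invented): appending a column never requires a rebuild at all, and the SSMS 2008 designer block can be lifted under Tools > Options > Designers by unchecking "Prevent saving changes that require table re-creation". Only reordering existing columns forces SSMS to rebuild the table behind the scenes, and column order is cosmetic as far as the engine is concerned.

        -- Appending a column needs no drop/recreate; make it NULLable
        -- (or give it a DEFAULT) so existing rows remain valid.
        ALTER TABLE dbo.MyTable
            ADD NewColumn INT NULL;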

    Read the article

  • How do I automatically rebuild the Sphinx index under django-sphinx?

    - by Apreche
    I just set up django-sphinx, and it is working beautifully. I am now able to search my model and get amazing results. The one problem is that I have to build the index by hand using the indexer command. That means that every time I add new content, I have to go to the command line to rebuild the search index manually. That is just not acceptable. I could make a cron job that runs the indexer command every so often, but that's far from optimal: new data won't be indexed until the cron runs again, and most runs will be unnecessary since my site doesn't have data added very often. How do I set it up so that the Sphinx index automatically rebuilds itself whenever data is added to, or modified in, a searchable Django model?
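
    One approach, sketched under the assumption of a hypothetical searchable model named Article and a Sphinx index named article_index (both must match your own models and sphinx.conf): hang a post_save signal handler on the model and shell out to the Sphinx indexer. The --rotate flag rebuilds the index and hot-swaps it into the running searchd.

        import subprocess

        from django.db.models.signals import post_save

        from myapp.models import Article  # hypothetical searchable model

        def rebuild_sphinx_index(sender, instance, **kwargs):
            # Rebuild in the background; --rotate swaps the fresh index
            # into the running searchd without downtime.
            subprocess.Popen(["indexer", "--rotate", "article_index"])

        post_save.connect(rebuild_sphinx_index, sender=Article)

    If saves become frequent, the same handler can instead set a dirty flag that a much cheaper periodic job checks, so the indexer only runs when something actually changed.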

    Read the article

  • Is writing eSQL database independent or not?

    - by Robert Koritnik
    Using EF we can use LINQ to read data, which is rather simple (especially using fluent calls), but we have less control unless we write eSQL on our own. Is writing eSQL actually data-store-independent code? So if we decide to change the data store, can the same statements still be used? Does writing eSQL strings in your code pose any serious security threats, similar to writing T-SQL statements as plain strings in C# code? (That's why SPs are recommended.) Could we still move eSQL scripts outside of the code and use some other technique to make them a bit more secure?

    Read the article

  • Issue reading packets from a pcap file. dpkt module. What gives?

    - by Chris
    I am running the following test script to try to read packets from a sample .pcap file I have downloaded. It won't run. I have all of the modules, but no examples seem to work.

        import socket
        import dpkt
        import sys

        pcapReader = dpkt.pcap.Reader(file("test1.pcap", "rb"))
        for ts, data in pcapReader:
            ether = dpkt.ethernet.Ethernet(data)
            if ether.type != dpkt.ethernet.ETH_TYPE_IP:
                raise
            ip = ether.data
            src = socket.inet_ntoa(ip.src)
            dst = socket.inet_ntoa(ip.dst)
            print "%s -> %s" % (src, dst)

    For some reason, this is not being interpreted properly. When running it, I get:

        KeyError: 138
        module body in test.py at line 4
        function __init__ in pcap.py at line 105
        Program exited.

    Why is this? What's wrong?
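
    A hedged reading of that traceback: the KeyError comes out of dpkt's pcap.Reader constructor, which maps the capture's link-layer type to a frame offset, so KeyError: 138 suggests the file was captured on a link type dpkt's table does not know (138 is DLT_APPLE_IP_OVER_IEEE1394 in libpcap), or the file is not a plain libpcap dump at all. The failure happens at line 4, before the first packet is read, so the loop body is not at fault; recapturing on an ordinary Ethernet interface should sidestep it. A slightly more defensive variant of the same script (Python 2, matching the question):

        import socket
        import dpkt

        f = open("test1.pcap", "rb")
        pcapReader = dpkt.pcap.Reader(f)
        for ts, data in pcapReader:
            ether = dpkt.ethernet.Ethernet(data)
            if ether.type != dpkt.ethernet.ETH_TYPE_IP:
                continue  # skip non-IP frames instead of raising
            ip = ether.data
            print "%s -> %s" % (socket.inet_ntoa(ip.src), socket.inet_ntoa(ip.dst))
        f.close()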

    Read the article

  • Recording SELECT statements in PostgreSQL 8.4

    - by David Anniwell
    Hi all. I've got a table which contains sensitive data, and according to our data protection policy we have to keep a record of every read/write of the data, including a row identifier and the user who accessed the table. The writes are no issue using triggers, but obviously triggers aren't supported for SELECT statements. What's the best method of doing this? I've looked at rules, but I can't get them to INSERT into a table, and I've tried logging every query, but this doesn't seem to log SELECT statements. Ideally, for security, I'd like to keep the log within a table in the database, but logging to a file is fine too. Thanks, David
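
    One hedged possibility: if plain statement logging missed SELECTs, log_statement may have been set to 'mod', which records writes only. Setting it to 'all' in postgresql.conf captures every statement, and 8.4's CSV log format can later be loaded back into an audit table with COPY:

        # postgresql.conf (8.4)
        log_statement     = 'all'      # 'none' | 'ddl' | 'mod' | 'all'
        logging_collector = on
        log_destination   = 'csvlog'   # CSV files can be COPYed into a table later

    The trade-off is that the log records statement text rather than per-row identifiers, so true row-level SELECT auditing in 8.4 generally means funnelling reads through a SECURITY DEFINER function that writes the audit row and then returns the data.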

    Read the article

  • How to store user preferences? Cookie becomes too big

    - by ari
    My application (ASP.NET MVC) has a lot of interaction with the user interface (jQuery/JS): for example, configuring various search charts, moving the gadgets around the screen, and more. I of course want to keep all this data for each user, so that it is available from any page in the domain and the user gets his preferences back. For now I keep all the data in a cookie, because asynchronous access to the server every time the user changes something did not seem logical, and that happens a lot. When the user logs out of the application, I save the cookie to the database. The problem is that the cookie becomes very large. The thought that this huge cookie is attached to each server request makes me feel that my approach is wrong. Another problem: cookies have a size limit. It varies by browser, but I have definitely been close to the border; my cookie easily reaches 4 KB. Is there another solution?
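
    One sketch of an alternative, assuming the browsers in play support it: keep the preferences in HTML5 localStorage, which holds megabytes rather than 4 KB and, unlike a cookie, never rides along with each request; then sync to the server once at logout, just as the cookie is synced today.

        function savePrefs(prefs) {
            if (window.localStorage) {
                localStorage.setItem("userPrefs", JSON.stringify(prefs));
            }
        }

        function loadPrefs() {
            var raw = window.localStorage && localStorage.getItem("userPrefs");
            return raw ? JSON.parse(raw) : {};
        }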

    Read the article

  • SQL 2008 pivot without aggregate

    - by Bryan Lewis
    I have a table of test score data that I need to pivot, and I am stuck on how to do it. I have the data like this:

        grade  listening  speaking  reading  writing
        0      0.0        0.0       0.0      0.0
        1      399.4      423.8     0.0      0.0
        2      461.6      508.4     424.2    431.5
        3      501.0      525.9     492.8    491.3
        4      521.9      517.4     488.7    486.7
        5      555.1      581.1     547.2    538.2
        6      562.7      545.5     498.2    530.2
        7      560.5      525.8     545.3    562.0
        8      580.9      548.7     551.4    560.3
        9      602.4      550.2     586.8    564.1
        10     623.4      581.1     589.9    568.5
        11     633.3      578.3     598.1    568.2
        12     626.0      588.8     600.5    564.8

    But I need it like this:

                   gr0   gr1    gr2    gr3    gr4    gr5    gr6    gr7   ...
        listening  0.0   399.4  461.6  501.0  521.9  555.1  562.7  560.5 ...
        speaking   0.0   423.8  ...
        reading    0.0   0.0    424.2  ...
        writing    0.0   0.0    431.5  ...

    I don't need to aggregate anything, just pivot the data.
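
    A hedged sketch of the usual workaround (the table is assumed to be called scores, with the columns shown above): T-SQL's PIVOT insists on an aggregate, but since each grade contributes exactly one value per subject, MAX() simply passes each value through unchanged. The inner UNPIVOT first turns the four subject columns into rows so they can become the row labels of the result.

        SELECT subject,
               [0] AS gr0, [1] AS gr1, [2] AS gr2, [3] AS gr3, [4] AS gr4,
               [5] AS gr5, [6] AS gr6, [7] AS gr7, [8] AS gr8, [9] AS gr9,
               [10] AS gr10, [11] AS gr11, [12] AS gr12
        FROM (SELECT grade, subject, score
              FROM scores
              UNPIVOT (score FOR subject IN (listening, speaking, reading, writing)) AS u) AS src
        PIVOT (MAX(score) FOR grade IN ([0],[1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12])) AS p;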

    Read the article

  • Rails - how can I query the db w/o touching the sessions table

    - by sa125
    Hi - I'm trying to provide an HTTP API for my app that queries a db that's read-only (for replication purposes). I find that my app crashes repeatedly when making a request, because the call tries to update the sessions table whenever I query the db. This doesn't happen when I return some text without hitting the database for info.

        class APIController < ApplicationController
          def view
            data = Product.find(params[:id]).to_json # will fail
            data = { :one => 1, :two => 2 }.to_json  # will succeed
            respond_to do |format|
              format.html { render :json => data }
            end
          end
        end

    How do I stop it from touching the sessions table on this request (it's currently issuing an UPDATE on the updated_at field for that session)? Thanks.
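
    A sketch of one possibility, assuming a Rails 2.x app using the ActiveRecord session store: switch sessions off for this controller (or just this action), so nothing attempts to write the sessions table at all.

        class APIController < ApplicationController
          session :off   # or scope it: session :off, :only => :view

          def view
            data = Product.find(params[:id]).to_json
            respond_to do |format|
              format.html { render :json => data }
            end
          end
        end

    On Rails 2.3, where sessions are lazy-loaded, the equivalent is simply never touching the session object anywhere in the request cycle for that action.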

    Read the article

  • Not all parameters get sent in jquery ajax call

    - by rksprst
    I have a strange error where my jQuery ajax request doesn't submit all the parameters.

        $.ajax({
            url: "/ajax/doAssignTask",
            type: 'GET',
            contentType: "application/json",
            data: {
                "just_a_task": just_a_task,
                "fb_post_date": fb_post_date,
                "task_fb_postId": task_fb_postId,
                "sedia_task_guid": sedia_task_guid,
                "itemGuid": itemGuid,
                "itemType": itemType,
                "taskName": taskName,
                "assignedToUserGuid": assignedToUserGuid,
                "taskDescription": taskDescription
            },
            success: function(data, status) {
                // success code
            },
            error: function(xhr, desc, err) {
                // error code
            }
        });

    But using Firebug (and debugging) I can see that only these variables are posted: assignedToUserGuid, itemGuid, itemType, just_a_task, taskDescription, taskName. It's missing fb_post_date, task_fb_postId, and sedia_task_guid. I have no idea what would cause it to post only some items and not others. Anyone know? The data is sent to an ASP.NET controller that returns a JsonResult (hence the contentType). Any help is appreciated. Thanks!
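
    A debugging sketch, not a diagnosis: serialize the payload yourself before handing it to $.ajax, so any field whose variable is null or undefined at call time shows up immediately. The three absentees being exactly the same every run is the pattern a scoping or timing bug tends to produce.

        var payload = {
            just_a_task: just_a_task,
            fb_post_date: fb_post_date,
            task_fb_postId: task_fb_postId,
            sedia_task_guid: sedia_task_guid,
            itemGuid: itemGuid,
            itemType: itemType,
            taskName: taskName,
            assignedToUserGuid: assignedToUserGuid,
            taskDescription: taskDescription
        };

        $.each(payload, function(key, value) {
            if (value === undefined || value === null) {
                console.log("empty at send time: " + key);
            }
        });
        console.log($.param(payload)); // the exact query string jQuery would build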

    Read the article

  • High density Silverlight charting control

    - by ahosie
    I've been looking into Silverlight charting controls to display a large number of samples (~10,000 data points in each of five separate series, ~50k points in all). I have found the existing options produced by Dundas, Visifire, Microsoft, etc. to be extremely poor performers when displaying more than a few hundred data points. I believe the performance issues with existing chart controls are caused by the heavy use of vector graphics, so one solution would be a client-side chart control that uses the WriteableBitmap class to generate a raster chart. Before I fall too far down the wheel-reinvention rabbit hole: has anyone found a third-party or OSS control that will manage large numbers of data points in a sparkline?

    Read the article

  • jquery change attribute

    - by junray
    Hi, I have 4 links and I need to move the href attribute into a rel attribute. I know I cannot rename an attribute directly, so I'm trying to get the data from the href attribute, set a new attribute (rel) with the data inside it, and then remove the href attribute. Basically I'm doing this:

        $('div#menu ul li a').each(function(){
            var lin = $(this).attr('href');
            $('div#menu ul li a').attr('rel', lin);
            $(this).removeAttr('href');
        });

    It works, but it sets the same rel data on every link I have. Any help? Thnx
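
    A sketch of the likely fix: inside .each(), this is the current link, so the middle line should set rel on $(this); re-running the full selector applies each href to all four anchors in turn, which is exactly the symptom described.

        $('div#menu ul li a').each(function(){
            var lin = $(this).attr('href');
            $(this).attr('rel', lin);   // only the current link
            $(this).removeAttr('href');
        });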

    Read the article

  • Providing multi-version databases for backward compatibility for production applications/databases.

    - by JavaRocky
    How can I manage multiple versions of a database easily? I have some data (exposed as views that select from tables in other schemas) which other databases may reference by various means, including database synonyms and links. I wish to provide a sort of interface/guarantee for applications/databases which use this data, in case I ever need to update the views for correctness or applicability inside my database. How can I achieve this in a maintained, controlled and easy way? I am using Oracle 10g, if that matters.
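
    One hedged sketch (all object names invented): publish each contract as an explicitly versioned view and let consumers bind to a stable synonym, so the pointer can be moved when a new version appears while old versions stay available for stragglers.

        CREATE OR REPLACE VIEW customer_data_v1 AS
          SELECT id, name FROM customers;

        CREATE OR REPLACE VIEW customer_data_v2 AS
          SELECT id, name, created_at FROM customers;

        -- Consumers (and their synonyms/db links) reference the stable name only:
        CREATE OR REPLACE SYNONYM customer_data FOR customer_data_v2;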

    Read the article

  • PHP Tag Cloud System

    - by deniz
    Hi, I am implementing a tag cloud system based on this recommendation (however, I am not using foreign keys). I am allowing up to 25 tags. My question is: how should I handle editing of items? I have this item adding/editing page:

        Title:
        Description:
        Tags: (example data) computer, book, web, design

    If someone edits an item's details, do I need to delete all the tags from the Item2Tag table first, then add the new elements? For instance, suppose someone changed the data to this:

        Tags: (example data) computer, book, web, newspaper

    Do I need to delete all the tags from the Item2Tag table and then add these elements? That seems inefficient, but I could not find a more efficient way. The other problem with this approach is that if someone edits the description but does not change the tags box, I still need to delete all the elements from the Item2Tag table and re-add the same elements. I am not an experienced PHP coder, so could you suggest a better way to handle this? (A pure PHP/MySQL solution is preferable.) Thanks in advance,
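
    A sketch of the diff approach (column names invented): compare the old and new tag lists with array_diff and touch only the rows that actually changed; an unchanged tags box then costs zero tag queries.

        $old_tags = array('computer', 'book', 'web', 'design');    // loaded from Item2Tag
        $new_tags = array('computer', 'book', 'web', 'newspaper'); // parsed from the form

        $to_remove = array_diff($old_tags, $new_tags); // array('design')
        $to_add    = array_diff($new_tags, $old_tags); // array('newspaper')

        foreach ($to_remove as $tag) {
            // DELETE FROM Item2Tag WHERE item_id = ? AND tag_id = (lookup for $tag)
        }
        foreach ($to_add as $tag) {
            // INSERT INTO Item2Tag (item_id, tag_id) VALUES (?, lookup for $tag)
        }
        // If only the description changed, both arrays are empty and no
        // tag queries run at all.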

    Read the article

  • I would like to build an app that alerts me when road traffic is high or low. Where can I get the raw data?

    - by MedicineMan
    I'm sitting at work, waiting for traffic to die down. The thought occurred to me: I know when I want to go home, so why don't I have an app that watches traffic for me? I also know that there are a lot of smart people on Stack Overflow. Where can I get live traffic data for the San Francisco Bay Area region? The data source should be timely, accurate, and as high-resolution as possible. I would like to build the app on top of a service, rather than watch Google Maps or another website. I would prefer not to have to scrape the data, but I have been known to do this in the past when no other option exists.

    Read the article

  • Ajax loader image: How to set a minimum duration during which the loader is shown?

    - by whamsicore
    I am using jQuery for Ajax functionality, and a loader icon to indicate to the user that data is being retrieved. However, I want the user to see the loader icon for at least 1s, even if the data takes less than 1s to retrieve (if more than 1s is required, the loader icon should remain for the entire duration). Here is the HTML for the loader:

        <img id="loader" src="example.com/images/loader.gif"
             style="vertical-align: middle; display: none" />

    I am using the jQuery .ajax() function for my data processing.
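
    A sketch of one way to do it (the URL is a placeholder): record when the loader was shown, and on success hide it only after whatever remains of the minimum display time has elapsed.

        var MIN_SHOW_MS = 1000;
        var shownAt = new Date().getTime();
        $('#loader').show();

        $.ajax({
            url: '/some/endpoint',  // placeholder
            success: function(data) {
                var elapsed = new Date().getTime() - shownAt;
                var remaining = Math.max(0, MIN_SHOW_MS - elapsed);
                // Hide immediately if a second has already passed; otherwise
                // wait out the remainder so the icon never just flickers.
                setTimeout(function() { $('#loader').hide(); }, remaining);
            }
        });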

    Read the article

  • Threading in C#

    - by Lalit Dhake
    Hi, I have a console application containing a process that fetches data from a database through different layers (business and data access) and generates output. This process will execute many times, say 10 times. I want to run these executions simultaneously, not have one start only after the previous one finishes: after the 1st starts, the 2nd, 3rd, ... 10th should start as well. In other words, it should be multithreaded. How can I achieve this? And will it give me errors when opening and closing the database connections?
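
    A minimal sketch, assuming RunProcess() stands in for the question's fetch-and-generate unit of work: start one thread per run and join them so the console app doesn't exit before they finish. As long as each thread opens (and disposes) its own connection rather than sharing one across threads, ADO.NET's connection pooling makes the concurrent opens and closes safe.

        using System.Collections.Generic;
        using System.Threading;

        class Program
        {
            static void RunProcess()
            {
                // Hypothetical: fetch via the business/data-access layers and
                // generate output, using a connection opened inside this method.
            }

            static void Main()
            {
                var threads = new List<Thread>();
                for (int i = 0; i < 10; i++)
                {
                    var t = new Thread(RunProcess);
                    threads.Add(t);
                    t.Start();
                }
                threads.ForEach(t => t.Join()); // wait for all runs to finish
            }
        }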

    Read the article

  • Django syncdb not making tables for my app

    - by Rosarch
    It used to work, and now it doesn't: python manage.py syncdb no longer makes tables for my app. From settings.py:

        INSTALLED_APPS = (
            'django.contrib.auth',
            'django.contrib.contenttypes',
            'django.contrib.sessions',
            'django.contrib.sites',
            'mysite.myapp',
            'django.contrib.admin',
        )

    What could I be doing wrong? The break appeared to coincide with editing this model in models.py, but that could be total coincidence. I commented out the lines I changed, and it still doesn't work.

        class MyUser(models.Model):
            user = models.ForeignKey(User, unique=True)
            takingReqSets = models.ManyToManyField(RequirementSet, blank=True)
            takingTerms = models.ManyToManyField(Term, blank=True)
            takingCourses = models.ManyToManyField(Course, through=TakingCourse, blank=True)
            school = models.ForeignKey(School)
            # minCreditsPerTerm = models.IntegerField(blank=True)
            # maxCreditsPerTerm = models.IntegerField(blank=True)
            # optimalCreditsPerTerm = models.IntegerField(blank=True)

    UPDATE: When I run python manage.py loaddata initial_data, it gives an error:

        DeserializationError: Invalid model identifier: myapp.SomeModel

    Loading this data had worked fine before. The error is thrown on the very first object in the data file.
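
    A hedged note plus a sketch: syncdb only CREATEs tables that don't exist yet; it never ALTERs or replaces existing ones, so once a table for the app exists (even a stale or half-created one), syncdb quietly skips it. Era-appropriate commands for checking and forcing a rebuild (the reset is destructive, so development data only):

        python manage.py sqlall myapp    # print the CREATE TABLE statements syncdb would use
        python manage.py reset myapp     # DROP and re-CREATE this app's tables (destroys data!)
        python manage.py syncdb

    For the loaddata error, fixture model identifiers are lowercase "app_label.modelname" (e.g. myapp.myuser); an identifier naming a model that no longer matches the app label or a current model class fails deserialization on the first object, exactly as described.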

    Read the article

  • Unaccounted for database size

    - by Nazadus
    I currently have a database that is 20GB in size. I've run a few scripts which show each table's size (and other incredibly useful information, such as index stats), and the biggest table has 1.1 million records taking up 150MB of data. We have fewer than 50 tables, most of which take up less than 1MB of data. After looking at the size of each table, I don't understand why the database shouldn't be 1GB in size after a shrink. The amount of available free space that SQL Server (2005) reports is 0%. The log mode is set to simple. At this point my main concern is that I feel like I have 19GB of unaccounted-for used space. Is there something else I should look at? Normally I wouldn't care, and would make this a passive research project, except this particular situation calls for us to do a backup and restore on a weekly basis to put a copy on a satellite (which has no internet, so it must be done manually). I'd much rather copy 1GB (or even 5GB!) than 20GB of data each week. sp_spaceused reports the following:

        Navigator-Production   19184.56 MB   3.02 MB

    And the second part of it:

        19640872 KB   19512112 KB   108184 KB   20576 KB

    I've found a few other scripts (such as the ones from two of the server database size questions here), and they all report the same information as above or below. The script I am using is from SQLTeam. Here is the header info:

        * BigTables.sql
        * Bill Graziano (SQLTeam.com)
        * graz@<email removed>
        * v1.11

    The top few tables show this (table, rows, reserved space, data, index, unused, etc.):

        Activity          1143639   131 MB   89 MB   41768 KB   1648 KB    46%   1%
        EventAttendance    883261    90 MB   58 MB   32264 KB    328 KB    54%   0%
        Person             113437    31 MB   15 MB   15752 KB    912 KB   103%   3%
        HouseholdMember    113443    12 MB    6 MB    5224 KB    432 KB    82%   4%
        PostalAddress       48870     8 MB    6 MB    2200 KB    280 KB    36%   3%

    The rest of the tables are the same size or smaller. There are no more than 50 tables.

    Update 1: All tables use unique identifiers, usually an int incremented by 1 per row. I've also re-indexed everything, and I ran the DBCC shrink command as well as updating the usage, before and after, over and over. An interesting thing I found is that after I restarted the server and confirmed no one was using it (and no maintenance procs were running; this is a very new application, under a week old), every now and then the shrink would say something about data having changed. Googling yielded too few useful answers, with the obvious ones not applying (it was 1am and I had disconnected everyone, so it seems impossible that was really the case). The data was migrated via C# code which basically looked at another server and brought things over. The quantity of deletes at this point in time is probably under 50k rows; even if those were the biggest rows, that wouldn't be more than 100MB, I would imagine. When I go to shrink via the GUI, it reports 0% available to shrink, indicating that I've already gotten it as small as it thinks it can go.

    Update 2: sp_spaceused 'Activity' yields this (which seems right on the money):

        Activity   1143639   134488 KB   91072 KB   41768 KB   1648 KB

    The fill factor was 90. All primary keys are ints. Here is the command I used to update usage:

        DBCC UPDATEUSAGE(0);

    Update 3: Per Edosoft's request:

        Image   111975   2407773   19262184

    It appears as though the Image table believes it's the 19GB portion. I don't understand what this means, though. Is it really 19GB, or is it misrepresented?

    Update 4: Talking to a co-worker, I found out that it's because of the pages, as someone else here has also stated might be the case. The only index on the Image table is a clustered PK. Is this something I can fix, or do I just have to deal with it? The regular script shows the Image table to be 6MB in size.

    Update 5: I think I'm just going to have to deal with it, after further research. The images have been resized to roughly 2-5KB each; on a normal file system they don't consume much space, but on SQL Server they seem to consume considerably more. The real answer, in the long run, will likely be separating that table into another partition or something similar.
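
    A hedged sketch for this kind of investigation on SQL Server 2005: break usage down by allocation-unit type, so LOB pages (where image/text blobs live, and which table-size scripts based on row data often undercount) appear as their own line. If LOB_DATA dominates, the 19GB is real storage rather than misreporting.

        -- Size per table per allocation type; LOB_DATA holds image/text
        -- columns. (8 KB pages, hence * 8 / 1024 for MB.)
        SELECT o.name AS table_name,
               au.type_desc,
               SUM(au.total_pages) * 8 / 1024 AS size_mb
        FROM sys.allocation_units AS au
        JOIN sys.partitions AS p ON au.container_id = p.partition_id
        JOIN sys.objects AS o ON p.object_id = o.object_id
        GROUP BY o.name, au.type_desc
        ORDER BY size_mb DESC;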

    Read the article

  • Representing a 16 byte variable

    - by Bobby
    I have to represent a 16-byte field as part of a data structure:

        struct Data_Entry {
            uint8  CUI_Type;
            uint8  CUI_Size;
            uint16 Src_Refresh_Period;
            uint16 Src_Buffer_Size;
            uint16 Src_CUI_Offset;
            uint32 Src_BCW_Address;
            uint32 Src_Previous_Timestamp;
            /* The field below should be a 16 byte field */
            uint32 Data;
        };

    How would I represent the Data field as a 16-byte field instead of the 4-byte field it currently is? Thanks, Bobby
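
    A sketch of the usual representation: declare the field as a fixed-size byte array (or as four 32-bit words if word-aligned access is more convenient); both occupy exactly 16 bytes.

        #include <stdint.h>

        typedef uint8_t  uint8;   /* assuming these typedefs exist elsewhere */
        typedef uint32_t uint32;

        struct Data_Entry_Sketch {
            uint8 Data[16];       /* 16 raw bytes */
            /* alternatively: uint32 Data[4];  same size, word access */
        };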

    Read the article

  • How to encrypt the mobile configuration profiles in iOS (in OTA deployments)?

    - by user1353477
    I am trying to sign and encrypt .mobileconfig profiles for iOS devices. Signing works perfectly using the OpenSSL::PKCS7 sign function in Ruby; however, using the encrypt function I do get encrypted data, but Safari fails to install the profile, saying "Invalid Profile". There are two questions in this regard: 1. Which data from the .mobileconfig profile is actually encrypted and goes into the <data> section of the EncryptedPayloadContent? 2. Is that data in binary format (.der) or base64-encoded? Any help in this regard would be appreciated, as Apple severely lacks documentation on encrypting the profiles.

    Read the article

  • OOP Design Question - Where/When do you Validate properties?

    - by JW
    I have read a few books on OOP (DDD / PoEAA / Gang of Four) and none of them seem to cover the topic of validation; it always seems to be assumed that data is valid. I gather from the answers to this post (http://stackoverflow.com/questions/1651964/oop-design-question-validating-properties) that a client should only attempt to set a valid property value on a domain object. This person has asked a similar question that remains unanswered: http://bytes.com/topic/php/answers/789086-php-oop-setters-getters-data-validation#post3136182. So how do you ensure a value is valid? Do you have a 'validator method' alongside every getter and setter?

        isValidName()
        setName()
        getName()

    I seem to be missing some key basic knowledge about OOP data validation. Can you point me to a book that covers this topic in detail? I.e., covering different types of validation, invariants, handling feedback, whether or not to use exceptions, etc.
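
    One common answer, sketched here in PHP to match the linked thread (the class and the rule are invented): make the setter itself the validator, throwing on bad input, so objects can never hold an invalid value and no parallel isValid* family is needed.

        class Person
        {
            private $name;

            public function setName($name)
            {
                // The invariant lives in one place: the only way to set the value.
                if (!is_string($name) || trim($name) === '') {
                    throw new InvalidArgumentException('Name must be a non-empty string');
                }
                $this->name = $name;
            }

            public function getName()
            {
                return $this->name;
            }
        }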

    Read the article

  • Wiki Database, is there one?

    - by Faiz
    I was searching the net for something like a wiki database: just like Wikipedia, but storing structured content, editable by users. What I was looking for was an online database, accessible by everyone, where people can design the schema and the data, with proper versioning of both schema and data. I couldn't find any such site. I am not sure if it is my search skills, or if there really is no wiki database as of now. Does anyone out there know of anything like this? I think there is great potential for something like this. A possible example would be a website with a GUI for querying a MySQL DB, where any website visitor can create DB objects and populate data.

    Read the article

  • Jagged Edge Arrays in PHP

    - by chriscct7
    I want to store some data in (I guess) a semi-2D, semi-3D array in PHP (5.3). What I need to do is store data about each floor, like this:

        Floor   Num of Spots   Handicap          Motorcycle     Other
        1       100            array(15,16,17)   array(47,62)   array(99,100)
        2       100            array(15,16,17)   array(47,62)   array(99,100)

    and so on. The problem is that if the Handicap, Motorcycle and Other columns were ints, I could just store the data in a 2D array. However, they aren't. So I was thinking I could make something almost like a 3D array, with the first two columns only being in 2D. The other thought I had was making a 2D array where, for columns 3, 4 and 5, instead of saving array(15,16) I would save the string "1516", and then split it at every two digits (1-digit numbers would be prefixed with a 0). However, I am wondering about the limit on the length of a string, because if I decide to move to 3-digit numbers in the array, like array(100, 104), and I need to store a lot of numbers, I am thinking I will quickly exceed the max.
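
    For what it's worth, a sketch: PHP arrays nest freely and mix scalar and array values without any declared dimensionality, so the natural representation needs no string packing and has no length ceiling to worry about.

        $floors = array(
            1 => array(
                'spots'      => 100,
                'handicap'   => array(15, 16, 17),
                'motorcycle' => array(47, 62),
                'other'      => array(99, 100),
            ),
            2 => array(
                'spots'      => 100,
                'handicap'   => array(15, 16, 17),
                'motorcycle' => array(47, 62),
                'other'      => array(99, 100),
            ),
        );

        echo $floors[1]['handicap'][2]; // 17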

    Read the article

  • Appscript to write iTunes artwork

    - by Kartik Aiyer
    I'm trying to capture artwork from a PICT file and embed it into a track in iTunes using Python appscript. I did something like this:

        imFile = open('/Users/kartikaiyer/temp.pict', 'r')
        data = imFile.read()
        it = app('iTunes')
        sel = it.current_track.get()
        sel.artworks[0].data_.set(data[513:])

    I get an error:

        OSERROR: -1731
        MESSAGE: Unknown object

    Similar AppleScript code looks like this:

        tell application "iTunes"
            set the_artwork to read (POSIX file "/Users/kartikaiyer/temp.pict") from 513 as picture
            set data of artwork 1 of current track to the_artwork
        end tell

    I tried using ASTranslate, but it never instantiates the_artwork, and then throws an error when there is a reference to the_artwork. Can anyone help? I'm new to appscript and Python in general.
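
    A hedged sketch of two likely culprits: appscript keeps AppleScript's 1-based element indexing, so artworks[0] names no object at all (which would explain "Unknown object"); artworks[1] mirrors "artwork 1". And since AppleScript's "read ... from 513" starts at the 513th byte, the Python slice that skips the 512-byte PICT header is data[512:], not data[513:].

        from appscript import app

        imFile = open('/Users/kartikaiyer/temp.pict', 'rb')  # binary mode for PICT data
        data = imFile.read()
        imFile.close()

        it = app('iTunes')
        it.current_track.artworks[1].data_.set(data[512:])  # 1-based index, skip header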

    Read the article

  • Jquery getJSON Not Working Cross Site

    - by CJ
    I have a piece of JavaScript that grabs JSON data. When executed locally, everything seems to work fine; however, when I try accessing it from a different site, it doesn't work. Here's the script:

        $(function(){
            var aT = new AjaxTest();
            aT.getJson();
        });

        var AjaxTest = function() {
            this.ajaxUrl = "http://mydeveloperpage.com/sandbox/ajax_json_test/client_reciever.php";
            this.getJson = function(){
                $.getJSON(this.ajaxUrl, function(data){
                    $.each(data, function(i, piece){
                        alert(piece);
                    });
                });
            }
        }

    You can find a copy of the exact same file at http://mydeveloperpage.com/sandbox/ajax_json_test/. Any help would be greatly appreciated. Thanks!
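
    A sketch of the usual explanation and workaround: the browser's same-origin policy blocks a plain XMLHttpRequest to another host, which is why it only works when page and endpoint share an origin. Appending callback=? to the URL tells jQuery to use JSONP (a script-tag request) instead; the PHP side must then wrap its JSON in the callback name it receives, e.g. echo $_GET['callback'] . '(' . json_encode($payload) . ');'.

        var url = "http://mydeveloperpage.com/sandbox/ajax_json_test/client_reciever.php?callback=?";
        $.getJSON(url, function(data){
            $.each(data, function(i, piece){
                alert(piece);
            });
        });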

    Read the article
