Search Results

Search found 10966 results on 439 pages for 'kevin db'.


  • Best way to correct garbled data caused by false encoding

    - by ercan
    Hi all, I have a set of data that contains garbled text fields because of encoding errors during many import/exports from one database to another. Most of the errors were caused by converting UTF-8 to ISO-8859-1. Strangely enough, the errors are not consistent: the word 'München' appears as 'MÃ¼nchen' in some places and as 'MÜnchen' in others. Is there a trick in SQL Server to correct this kind of crap? The first thing I can think of is to exploit the COLLATE clause, so that Ã¼ is interpreted as ü, but I don't know exactly how. If it isn't possible to fix it at the DB level, do you know of any tool that helps with a bulk correction? (Not a manual find/replace tool, but a tool that somehow guesses the garbled text and corrects it.)
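
    For what it's worth, the common single-round-trip mojibake (UTF-8 bytes mis-read as ISO-8859-1) is mechanically reversible outside the DB. A minimal Python sketch, assuming the text suffered exactly one such round trip (pulling the column and writing it back is up to you); for messier, mixed corruption the ftfy library tries to guess the damage automatically:

        # Reverse one UTF-8 -> Latin-1 mis-decode: 'MÃ¼nchen' -> 'München'
        def fix_mojibake(text):
            try:
                return text.encode('latin-1').decode('utf-8')
            except (UnicodeEncodeError, UnicodeDecodeError):
                return text  # already clean, or damaged some other way

        assert fix_mojibake(u'M\xc3\xbcnchen') == u'M\xfcnchen'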


  • SQL Compare-Like tool for Oracle?

    - by Hitchhiker
    We're a .NET team which uses the Oracle DB for a lot of reasons that I won't get into. But deployment has been a bitch. We are manually keeping track of all the changes to the schema in each version, by keeping a record of all the scripts that we run during development. Now, if a developer forgets to check his script into source control after he runs it - which is not that rare - at the end of the iteration we get a great big headache. I hear that SQL Compare by Red Gate might solve this kind of issue, but it only supports SQL Server. Does anybody know of a similar tool for Oracle? I've been unable to find one.


  • Serving large generated files using Google App Engine?

    - by John Carter
    Hiya, Presently I have a GAE app that does some offline processing (backs up a user's data) and generates a file that's somewhere in the neighbourhood of 10-100 MB. I'm not sure of the best way to serve this file to the user. The two options I'm considering are: adding some code to the offline processing job that 'spoofs' the file as a form upload to the blobstore, then going through the normal blobstore process to serve it; or having the offline processing code store the file somewhere off of GAE and serving it from there. Is there a much better approach I'm overlooking? I'm guessing this is functionality that isn't well suited to GAE. I had thought of storing it in the datastore as db.Text or db.Blob, but there I run into the 1 MB entity limit. Any input would be appreciated.
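
    One route that avoids the form-upload spoofing: at the time this was written, GAE shipped an experimental Files API that writes straight into the blobstore. A rough sketch under that assumption (API names as documented for the experimental release - verify against your SDK version; backup_bytes is a hypothetical variable holding the generated data):

        from google.appengine.api import files

        # Create a writable blobstore file, append the generated backup, finalize it.
        file_name = files.blobstore.create(mime_type='application/octet-stream')
        with files.open(file_name, 'a') as f:
            f.write(backup_bytes)
        files.finalize(file_name)

        blob_key = files.blobstore.get_blob_key(file_name)
        # Serve later via blobstore_handlers.BlobstoreDownloadHandler.send_blob(blob_key)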


  • Problems inserting file data into sqlite database using python

    - by tylerc230
    I'm trying to open an image file in python and add that data to an sqlite table. I created the table using:

        CREATE TABLE "images" ("id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, "description" VARCHAR, "image" BLOB);

    I am trying to add the image to the db using:

        imageFile = open(imageName, 'rb')
        b = sqlite3.Binary(imageFile.read())
        targetCursor.execute("INSERT INTO images (image) values(?)", (b,))
        targetCursor.execute("SELECT id from images")
        for id in targetCursor:
            imageid = id[0]
        targetCursor.execute("INSERT INTO %s (questionID,imageID) values(?,?)" % table, (questionId, imageid))

    When I print the value of 'b' it looks like binary data, but when I call 'select image from images where id = 1' I get '????' printed to the console. Anyone know what I'm doing wrong?
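
    Printing a BLOB to the console isn't a reliable check - the shell just renders the raw bytes as '????'. A small verification sketch, assuming the same table (it reads the row back and writes it to disk so you can open the file in an image viewer; the db filename and output extension are made up):

        import sqlite3

        conn = sqlite3.connect('test.db')
        cur = conn.cursor()
        cur.execute("SELECT image FROM images WHERE id = ?", (1,))
        row = cur.fetchone()
        if row is not None:
            with open('roundtrip.png', 'wb') as out:
                out.write(row[0])  # the column comes back as a buffer/bytes object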


  • How do you make your Java application memory efficient?

    - by Boune
    How do you optimize the heap size usage of an application that has a lot (millions) of long-lived objects? (big cache, loading lots of records from a db)
    - Use the right data type
    - Avoid java.lang.String to represent other data types
    - Avoid duplicated objects
    - Use enums if the values are known in advance
    - Use object pools
    - String.intern() (good idea?)
    - Load/keep only the objects you need
    I am looking for general programming or Java-specific answers. No funky compiler switches.
    Edit: optimize the memory representation of a POJO that can appear millions of times in the heap. Use cases:
    - Load a huge CSV file into memory (converted into POJOs)
    - Use Hibernate to retrieve millions of records from a database
    Summary of answers (see the sketch after this list for the last one):
    - Use the flyweight pattern
    - Copy on write
    - Instead of loading 10M objects with 3 properties, is it more efficient to have 3 arrays (or another data structure) of size 10M? (Could be a pain to manipulate the data, but if you are really short on memory...)
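
    That last idea is essentially a column-oriented layout. A toy Java sketch of the trade-off, with hypothetical field names - three parallel primitive arrays replace ten million small objects, saving the per-object header (on the order of 8-16 bytes each on HotSpot) plus the reference array that would point at them:

        // Row-oriented: ~10M objects, each paying header + padding overhead.
        final class Person {
            final int age; final long id; final short zip;
            Person(int age, long id, short zip) { this.age = age; this.id = id; this.zip = zip; }
        }

        // Column-oriented: three primitive arrays, no per-row object at all.
        final class PersonColumns {
            final int[] age; final long[] id; final short[] zip;
            PersonColumns(int n) { age = new int[n]; id = new long[n]; zip = new short[n]; }
            long idOf(int row) { return id[row]; }  // a "row" is just an index into every column
        }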


  • Hibernate / MySQL Bulk insert problem

    - by Marty Pitt
    I'm having trouble getting Hibernate to perform a bulk insert on MySQL. I'm using Hibernate 3.3 and MySQL 5.1. At a high level, this is what's happening:

        @Transactional
        public Set<Long> doUpdate(Project project, IRepository externalSource) {
            List<IEntity> entities = externalSource.loadEntites();
            buildEntities(entities, project);
            persistEntities(project);
        }

        public void persistEntities(Project project) {
            projectDAO.update(project);
        }

    This results in n log entries (1 for every row) as follows:

        Hibernate: insert into ProjectEntity (name, parent_id, path, project_id, state, type) values (?, ?, ?, ?, ?, ?)

    I'd like to see this get batched, so the update is more performant. It's possible that this routine could result in tens of thousands of rows generated, and a db trip per row is a killer. Why isn't this getting batched? (It's my understanding that batch inserts are supposed to be the default, where appropriate, in Hibernate.)
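
    Two things usually govern this in Hibernate 3.x: JDBC batching is off until you set a batch size, and it is silently disabled for entities whose IDs come from an IDENTITY/auto-increment generator (Hibernate has to read each generated key back per row). A config sketch - the property names are the standard Hibernate ones, the values are only illustrative:

        # hibernate.properties (or the equivalent <property> entries in hibernate.cfg.xml)
        hibernate.jdbc.batch_size=50     # enable JDBC batching, up to 50 statements per round trip
        hibernate.order_inserts=true     # group inserts by entity type so they can batch
        hibernate.order_updates=true

        # Caveat: with an IDENTITY id generator on MySQL, Hibernate disables insert
        # batching entirely; a table or hilo generator restores it.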


  • Passing extended parameter into Sql 2008 connection string

    - by Pita.O
    Hi, I need to support extensive auditing capabilities for a system backed by SQL Server 2008. Since I plan to use LINQ (with no stored procs), the database would be a clean, zero-contact data repository. However, I need to practically record a snapshot of every change that happens in the db, so I thought I should use triggers. But then I need a user id for the particular user (not the connection-string user id) to flow through into the database. In Oracle I would have been able to set up a PROXY USER and the trigger would be able to pick that up; last I checked, there was no proxy-user concept in SQL Server. Does anyone know of any extended property I can use to flow through my authenticated user name? ps: I don't mind the impact on connection pooling (if any). Thanks. P
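
    Not exactly a connection-string switch, but SQL Server 2008 does have a per-session scratch slot that triggers can read: CONTEXT_INFO, a 128-byte varbinary. A sketch of the pattern (the user name is invented) - the app sets it once after opening the connection, and the audit trigger reads it back:

        -- Application side: run once per connection/request, after authenticating the user.
        DECLARE @user varbinary(128) = CAST('jane.doe@example.com' AS varbinary(128));
        SET CONTEXT_INFO @user;

        -- Trigger side: recover the value and strip the zero-padding.
        DECLARE @who varchar(128) =
            REPLACE(CAST(CONTEXT_INFO() AS varchar(128)), CHAR(0), '');

    Since you don't mind the pooling impact, the one thing to remember is that the value persists on a pooled connection until something overwrites it.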


  • Disable Primary Key and Re-Enable After SQL Bulk Insert

    - by Jon
    I am about to run a massive data insert into my DB. I have managed to work out how to disable and rebuild the non-clustered indexes on my tables, but I also want to disable/enable primary keys. You can't disable the clustered index for the primary key, as the table is inaccessible when that is done, and my attempt to do an ALTER TABLE for constraints does not work, as I think that is only for foreign keys. Do you know of a way to disable the primary key and re-enable it after a SQL bulk insert? Note: this is over numerous tables, so I don't know the exact primary key specifications (e.g. name).
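
    Since a table stays unreadable while its clustered PK is disabled, the usual compromise is to leave the PKs alone and script the non-clustered disables generically from sys.indexes, so you never need to know index names up front. A T-SQL sketch along those lines (assumes SQL Server 2008-era syntax; run the same loop with REBUILD after the load):

        DECLARE @sql nvarchar(max);
        SET @sql = N'';
        SELECT @sql = @sql + N'ALTER INDEX ' + QUOTENAME(i.name)
             + N' ON ' + QUOTENAME(OBJECT_SCHEMA_NAME(i.object_id))
             + N'.' + QUOTENAME(OBJECT_NAME(i.object_id)) + N' DISABLE;' + CHAR(10)
        FROM sys.indexes AS i
        JOIN sys.tables  AS t ON t.object_id = i.object_id   -- user tables only
        WHERE i.type_desc = N'NONCLUSTERED' AND i.is_disabled = 0;

        EXEC sys.sp_executesql @sql;  -- afterwards: same SELECT with REBUILD in place of DISABLE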


  • [jquery] Appending to Second Last element

    - by Shishant
    Hello, this is the final output. My HTML:

        <li id='$id'>TEXT
            <ul class='indent'>
                <li id='$id'>TEXT</li>
                <li id='$id'>TEXT</li>
                <li class='formContainer'>FORM</li>
            </ul>
        </li>

    I want to append an li after all the other lis but before the form li, so in this example the new li lands between the last TEXT li and the form. The $id values are db ids of the li elements, which are unique.
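
    In jQuery this is a one-liner with .before(), since the form li is addressable by its class. A sketch assuming the markup above (newId stands in for whatever db id your server hands back for the new row):

        // Insert the new item just before the form row, i.e. after every other li.
        $('ul.indent li.formContainer').before('<li id="' + newId + '">NEW TEXT</li>');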


  • Fastest way to convert file from latin1 to utf-8 in python.

    - by xsaero00
    I need the fastest way to convert files from latin1 to utf-8 in python. The files are large, ~2 GB. (I am moving DB data.) So far I have:

        import codecs
        infile = codecs.open(tmpfile, 'r', encoding='latin1')
        outfile = codecs.open(tmpfile1, 'w', encoding='utf-8')
        for line in infile:
            outfile.write(line)
        infile.close()
        outfile.close()

    but it is still slow. The conversion takes a quarter of the whole migration time. I could also use a linux command-line utility if it is faster than native python code.
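
    Two cheap speedups, both safe because Latin-1 is a single-byte encoding, so any chunk boundary is a valid character boundary: convert in big binary blocks instead of line by line, or skip Python entirely with iconv. A sketch of both (the block size is arbitrary; the one-line double with needs Python 2.7+):

        # Pure-Python, chunked: no line splitting, far fewer write calls.
        BLOCK = 16 * 1024 * 1024
        with open(tmpfile, 'rb') as src, open(tmpfile1, 'wb') as dst:
            while True:
                chunk = src.read(BLOCK)
                if not chunk:
                    break
                dst.write(chunk.decode('latin1').encode('utf-8'))

        # Or from the shell:
        #   iconv -f LATIN1 -t UTF-8 infile > outfile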


  • asp.net app slow and not rendering but .txt file works fine?

    - by mike
    I have a load-balanced set of 2 servers running a .NET application that is loading slowly, or not at all, to the point that the load balancer is redirecting me and my users to our DR server. I added a hard-coded file to both servers to tell which server is working and which is not. Both serve the txt file and display its contents in no time. However, the ASP.NET pages don't render: the DB is not sweating, and even a plain .aspx page doesn't come up. Any thoughts on troubleshooting this?


  • Transfer Core Data from One Project to Another

    - by Michael
    The answer is probably a resounding 'NO', but before I start a new project from scratch I thought I'd ask. I create many throwaway projects to test ideas and code before combining all the successful bits from the scratch projects into a final version. So I have one project with the Core Data stuff worked out, but I want to move it to a new project. My guess is that there are too many internal hooks, and dropping in the .xcdatamodel and the sqlite db is just not going to work. I'd be glad to be wrong...


  • Are SqlCipher open cursors a security concern?

    - by user1178479
    I'm using SqlCipher with content providers. Right now, when I want to lock the app I just clear out the cached password. However, the app can continue to work with any open cursors, which means that re-opening the app grants access to the sensitive data. I fix this issue on the surface by redirecting to a login screen if the app doesn't have the password. However, I'm wondering whether there are any security issues with these open cursors, or if I should just continue to block UI access and not worry. SqlCipher's docs say that it reads/writes encrypted pages on the fly, as opposed to decrypting the entire DB; this makes me think that open cursors are still secure. The main concern here is that someone loses their phone and then a knowledgeable individual uses these open cursors to extract sensitive data.


  • Is it possible to run a program compiled with Xcode on Mac OS X on FreeBSD? (Objective-C/Cocoa)

    - by Eonil
    Hi. I have a plan to build a web site whose CGI backend is made with Cocoa. My goal is to develop on Mac OS X and run on FreeBSD. Is this possible? As I understand it, there is a free implementation of some NeXTSTEP classes, GNUstep. The site is built almost entirely with strings; I read the GNUstep documents, and its classes are enough. The DB connection will be made with C interfaces. The biggest problem I'm concerned about is linking and binary compatibility. I'm currently configuring FreeBSD in VirtualBox, but I'd like to hear from experts whether this is feasible at all. This is not a production server, just a trial. Please feel free to say anything.


  • MySQL - How do I insert an additional where clause into this full-text search query

    - by Steven
    I want to add a WHERE clause to a full-text search query (to limit it to the past 24 hours), but wherever I insert it I get a Low Level Error. Is it possible to add the clause, and if so, how?

        $query = "SELECT * WHERE story_time > time()-86400
                  AND MATCH (story_title) AGAINST ('".validate_input($_GET['q'])."' IN BOOLEAN MODE) AS Relevance
                  FROM ".$config['db']['pre']."stories
                  WHERE MATCH (story_title) AGAINST ('+".validate_input($_GET['q'])."' IN BOOLEAN MODE)
                  HAVING Relevance > 0.2
                  ORDER BY Relevance DESC, story_time DESC
                  LIMIT ".validate_input(($_GET['page']-1)*10).",10";
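
    For reference, the structural problems are that the "... AS Relevance" column has to live between SELECT and FROM, there are two WHERE clauses, and time() is PHP, not MySQL. A corrected shape, assuming story_time holds a unix timestamp ('term' stands in for the escaped search string, and 0,10 for the paging arithmetic):

        SELECT s.*,
               MATCH (story_title) AGAINST ('+term' IN BOOLEAN MODE) AS Relevance
        FROM stories AS s
        WHERE story_time > UNIX_TIMESTAMP() - 86400
          AND MATCH (story_title) AGAINST ('+term' IN BOOLEAN MODE)
        HAVING Relevance > 0.2
        ORDER BY Relevance DESC, story_time DESC
        LIMIT 0, 10;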


  • How can I get my div id to reload via ajax and jquery

    - by Matt Nathanson
    I'm creating a CMS using jQuery and AJAX. When I click my "Add Campaign" button, it creates a unique client ID in the DB, and on a hard reload the new client shows up in its container. I am trying to use ajax to reload the container on the fly, and I'm not having the exact luck I am hoping for. I can get it to reload, but it's pulling in descriptions of each of the clients as well!

        function AddNewClient() {
            dataToLoad = 'clientID=' + clientID + '&addClient=yes';
            $.ajax({
                type: 'post',
                url: '/clients/controller.php',
                datatype: 'html',
                data: dataToLoad,
                target: ('#clientssidebar'),
                async: false,
                success: function(html){
                    $('#clientssidebar').html(html);
                },
                error: function() {
                    alert('An error occured!');
                }
            });
        };
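
    (Aside: $.ajax has no 'target' option, and the option is spelled dataType, so the whole response lands in the container.) If the server keeps returning more than the sidebar, jQuery's .load() can post the data and graft in just the fragment you want by putting a selector after the URL - a sketch assuming controller.php returns a full page containing #clientssidebar:

        function AddNewClient() {
            // Passing an object makes load() use POST; the ' #clientssidebar' suffix
            // keeps only that element out of whatever HTML comes back.
            $('#clientssidebar').load('/clients/controller.php #clientssidebar',
                { clientID: clientID, addClient: 'yes' },
                function(response, status) {
                    if (status === 'error') alert('An error occurred!');
                });
        }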


  • how to get count column

    - by soclose
    Hi, I'd like to get the value of a COUNT column from a cursor.

        public Cursor getRCount(String iplace) throws SQLException {
            try {
                String strSql = "SELECT COUNT(_id) AS RCount FROM tbName WHERE place= '" + iplace + "'";
                return db.rawQuery(strSql, null);
            } catch (SQLException e) {
                Log.e("Exception on query", e.toString());
                return null;
            }
        }

    I tried to get this count column value from the cursor like below:

        Cursor cR = mDbHelper.getRCount(cplace);
        if (cR.getCount() > 0) {
            long lCount = cR.getLong(0);
        }
        cR.close();

    I got a debug error. How do I get the value?
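
    The likely culprit: an Android Cursor starts positioned before the first row, so any getLong() call throws until you move onto a row. A sketch of the read, assuming the query above:

        Cursor cR = mDbHelper.getRCount(cplace);
        long lCount = 0;
        if (cR != null) {
            if (cR.moveToFirst()) {  // step onto row 0 before reading any column
                lCount = cR.getLong(cR.getColumnIndex("RCount"));
            }
            cR.close();
        }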


  • How to remove duplicate records in a table?

    - by Mason Wheeler
    I've got a table in a testing DB that someone apparently got a little too trigger-happy on when running INSERT scripts to set it up. The schema looks like this:

        ID            UNIQUEIDENTIFIER
        TYPE_INT      SMALLINT
        SYSTEM_VALUE  SMALLINT
        NAME          VARCHAR
        MAPPED_VALUE  VARCHAR

    It's supposed to have a few dozen rows. It has about 200,000, most of which are duplicates in which TYPE_INT, SYSTEM_VALUE, NAME and MAPPED_VALUE are all identical and ID is not. Now, I could probably make a script to clean this up that creates a temporary table in memory, uses INSERT .. SELECT DISTINCT to grab all the unique values, TRUNCATEs the original table and then copies everything back. But is there a simpler way to do it, like a DELETE query with something special in the WHERE clause?
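
    On SQL Server 2005+ the idiomatic one-statement cleanup is ROW_NUMBER() partitioned by the duplicate key, deleting everything past the first copy. A sketch with a placeholder table name:

        WITH ranked AS (
            SELECT ROW_NUMBER() OVER (
                       PARTITION BY TYPE_INT, SYSTEM_VALUE, NAME, MAPPED_VALUE
                       ORDER BY ID) AS rn
            FROM dbo.MappingTable  -- hypothetical name
        )
        DELETE FROM ranked WHERE rn > 1;  -- deleting via the CTE removes the underlying rows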


  • Image Upload directly from client to remote server? Spring/Tomcat

    - by Prem
    Just wondering what the common solution is for this. We have two web servers that are load balanced, and a separate server that holds our images. Our current process: a user uploads an image directly to the web server (whichever they are connected to) and we enter a job into our DB; another process checks for image jobs every few minutes and copies the image from the web server up to the image server. The delay between when a user uploads and when the image is visible is not ideal. We could tighten the loop on how often we check for image jobs, but ideally I would like user-uploaded images to go directly to the image server rather than being copied twice. How should this be done? Is there anything in Spring to deal with this? It seems like how most people would deal with a CDN, I would think. I want to limit the time it takes for an image that a user uploads to become available on our site...


  • Zend Framework Relations vs. Table Select

    - by rtmilker
    Hey! I just want to know your opinion on using join tables within the Zend Framework. Of course you can use relations by defining a referenceMap, dependentTables and so on, or use setIntegrityCheck(false) within a db select(). The setIntegrityCheck version seems a little bit dirty to me, but the other version is not very suitable for big queries that join many tables... I've been a PHP developer for 5 years, am new to the Zend Framework, and just want to get a direction for my first project. Thanks!!!
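
    For concreteness, a minimal sketch of the second variant (table and column names invented) - a Zend_Db_Table select that joins out to another table once the integrity check is off:

        $select = $table->select()
                        ->setIntegrityCheck(false)  // allow columns from outside this table
                        ->from(array('c' => 'clients'), array('c.name'))
                        ->join(array('o' => 'orders'), 'o.client_id = c.id', array('o.total'));
        $rows = $table->fetchAll($select);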


  • Model in sub-directory via app_label?

    - by prometheus
    In order to place my models in sub-folders, I tried to use the app_label Meta field as described here. My directory structure looks like this:

        project/
            apps/
                foo/
                    models/
                        __init__.py
                        bar_model.py

    In bar_model.py I define my model like this:

        from django.db import models

        class SomeModel(models.Model):
            field = models.TextField()

            class Meta:
                app_label = "foo"

    I can successfully import the model like so:

        from apps.foo.models.bar_model import SomeModel

    However, running ./manage.py syncdb does not create the table for the model. In verbose mode I do see, however, that the app "foo" is properly recognized (it's in INSTALLED_APPS in settings.py). Moving the model to models.py under foo does work. Is there some specific convention, not documented with app_label or with the whole mechanism, that prevents this model structure from being recognized by syncdb?
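
    One convention that commonly bites here: syncdb discovers models by importing the models package itself, so every model defined in a submodule has to be re-exported from models/__init__.py. A sketch of what that file would contain under the layout above:

        # apps/foo/models/__init__.py
        # syncdb imports this package, so pull the submodule's classes up to package level.
        from apps.foo.models.bar_model import SomeModel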


  • Django. default=datetime.now() problem

    - by Shamanu4
    Hello. I have this db model:

        from datetime import datetime

        class TermPayment(models.Model):
            dev_session = models.ForeignKey(DeviceSession, related_name='payments')
            user_session = models.ForeignKey(UserSession, related_name='payment')
            date = models.DateTimeField(default=datetime.now(), blank=True)
            sum = models.FloatField(default=0)
            cnt = models.IntegerField(default=0)

            class Meta:
                db_table = 'term_payments'
                ordering = ['-date']

    and here a new instance is added:

        # ...
        tp = TermPayment()
        tp.dev_session = self.conn.session  # device session hash
        tp.user_session = self.session  # user session hash
        tp.sum = sum
        tp.cnt = cnt
        tp.save()

    But I have a problem: all records in the database have the same value in the date field - the date of the first payment. After a server restart, one record gets a new date and the others get the same one as the first after the restart. It looks like some data cache is being used, but I can't find where. Database: MySQL 5.1.25, Django v1.1.1.
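
    That symptom is the classic default=datetime.now() pitfall: the call runs once, when the model class is first imported (i.e. at server start), and the frozen result becomes the default for every row. Passing the callable itself defers the call to each save - a minimal fix sketch:

        from datetime import datetime

        class TermPayment(models.Model):
            # note: no parentheses - Django calls datetime.now per instance
            date = models.DateTimeField(default=datetime.now, blank=True)
            # or, equivalently for creation timestamps:
            # date = models.DateTimeField(auto_now_add=True)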


  • MYSQL trigger gets deleted automatically

    - by Mirage
    I am using MySQL 5.1 with cPanel/WHM on CentOS. I had to use a trigger for one of my websites, so I installed the trigger as root, so that when something gets inserted into one table some more rows get inserted into another table. Everything was working fine, but now I see that there is no trigger in my database. How could it have been deleted from the DB? I am a bit worried: the site is currently not live, but this would cause problems if it happened on a live site. Could a MySQL update cause the trigger to be deleted? I have not updated, though. How can I make sure it doesn't happen in the future? Thanks


  • Easy plugin or procedure for sqlserver Management Studio to script row inserts.

    - by Patrick Karcher
    I've never been able to find a good script or plugin for SQL Server Management Studio (2005 and/or 2008) for a very common scripting need: specifying a few/all rows in a table and scripting their INSERTs. You can guess my story: I've got some configuration data in my dev db and I need to script it for deployment to UAT and then production. I've found a few kludgy systems in the past that were more trouble than they were worth. I need something free and unobtrusive. Once I find it, I'll share it with the other 20 developers in my shop who are annoyed by this. (Aren't we all annoyed by this, by the way?) What is the best, easiest, free way to specify a few/all rows in a table and get a script of their INSERTs?


  • how to get the individual parameters from the list of dynamic parameters in a webmethod,sent using

    - by kranthi
    Hi, I am using the jQuery ajax method on my aspx page, which invokes a webmethod in the code-behind. Currently the webmethod takes a couple of parameters like firstname, lastname, address etc., which I pass from the jQuery ajax method using:

        data: JSON.stringify({fname: firstname, lname: lastname, city: city})

    Now my requirement has changed such that the number and type of parameters passed is not fixed. For example, the parameter combination can be fname,city or city,lname or fname,lname,city or something else, so the webmethod should accept any number of parameters. I thought of using arrays to do so, as described here. But I do not understand how I can identify which, and how many, parameters have been passed to the webmethod in order to insert/update the data in the DB. Please could someone help me with this? Thanks
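
    One way to sidestep fixed signatures entirely: send one JSON object of name/value pairs as a single string parameter and deserialize it into a dictionary server-side, so iterating the keys tells you exactly which fields arrived and how many. A hedged C# sketch - the method and parameter names are invented:

        [WebMethod]
        public static void SaveFields(string fieldsJson)
        {
            // Client side: data: JSON.stringify({ fieldsJson: JSON.stringify({ fname: f, city: c }) })
            var fields = new System.Web.Script.Serialization.JavaScriptSerializer()
                .Deserialize<System.Collections.Generic.Dictionary<string, string>>(fieldsJson);
            foreach (var pair in fields)
            {
                // pair.Key names the field, pair.Value carries the data;
                // map the known keys onto your insert/update statement here.
            }
        }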

