Search Results

Search found 21589 results on 864 pages for 'primary key'.


  • SqlBulkCopy slow as molasses

    - by Chris
    I'm looking for the fastest way to bulk-load data via C#. I have a script that does the job, but slowly: 1000 records take about 2.5 seconds, and the files contain anywhere from 5000 to 250k records. I've read testimonials that SqlBulkCopy is the fastest option. What are some of the things that can slow it down?

    Table def:

        CREATE TABLE [dbo].[tempDispositions](
            [QuotaGroup] [varchar](100) NULL,
            [Country] [varchar](50) NULL,
            [ServiceGroup] [varchar](50) NULL,
            [Language] [varchar](50) NULL,
            [ContactChannel] [varchar](10) NULL,
            [TrackingID] [varchar](20) NULL,
            [CaseClosedDate] [varchar](25) NULL,
            [MSFTRep] [varchar](50) NULL,
            [CustEmail] [varchar](100) NULL,
            [CustPhone] [varchar](100) NULL,
            [CustomerName] [nvarchar](100) NULL,
            [ProductFamily] [varchar](35) NULL,
            [ProductSubType] [varchar](255) NULL,
            [CandidateReceivedDate] [varchar](25) NULL,
            [SurveyMode] [varchar](1) NULL,
            [SurveyWaveStartDate] [varchar](25) NULL,
            [SurveyInvitationDate] [varchar](25) NULL,
            [SurveyReminderDate] [varchar](25) NULL,
            [SurveyCompleteDate] [varchar](25) NULL,
            [OptOutDate] [varchar](25) NULL,
            [SurveyWaveEndDate] [varchar](25) NULL,
            [DispositionCode] [varchar](5) NULL,
            [SurveyName] [varchar](20) NULL,
            [SurveyVendor] [varchar](20) NULL,
            [BusinessUnitName] [varchar](25) NULL,
            [UploadId] [int] NULL,
            [LineNumber] [int] NULL,
            [BusinessUnitSubgroup] [varchar](25) NULL,
            [FileDate] [datetime] NULL
        ) ON [PRIMARY]

    And here's the code:

        private void BulkLoadContent(DataTable dt)
        {
            OnMessage("Bulk loading records to temp table");
            OnSubMessage("Bulk Load Started");
            using (SqlBulkCopy bcp = new SqlBulkCopy(conn))
            {
                bcp.DestinationTableName = "dbo.tempDispositions";
                bcp.BulkCopyTimeout = 0;
                foreach (DataColumn dc in dt.Columns)
                {
                    bcp.ColumnMappings.Add(dc.ColumnName, dc.ColumnName);
                }
                bcp.NotifyAfter = 2000;
                bcp.SqlRowsCopied += new SqlRowsCopiedEventHandler(bcp_SqlRowsCopied);
                bcp.WriteToServer(dt);
                bcp.Close();
            }
        }
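    A minimal first-diagnosis sketch, assuming SQL Server 2005 or later: bulk loads are fastest into a heap with no indexes or triggers and with minimal logging, so it is worth confirming nothing extra is attached to the target table. All names below come from the question except MyDb, which is a placeholder database name.

        -- list any indexes (type > 0) and triggers on the target table;
        -- either can slow a bulk load dramatically
        SELECT name FROM sys.indexes
        WHERE object_id = OBJECT_ID('dbo.tempDispositions') AND type > 0;

        SELECT name FROM sys.triggers
        WHERE parent_id = OBJECT_ID('dbo.tempDispositions');

        -- minimal logging requires the bulk-logged (or simple) recovery model;
        -- MyDb is a placeholder, and this is an assumption about your environment
        ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED;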

    Read the article

  • HTML form with multiple submit options

    - by phimuemue
    Hi, I'm trying to create a small web app that is used to remove items from a MySQL table. It just shows the items in an HTML table, with a [delete] button for each item:

        item_1 [delete]
        item_2 [delete]
        ...
        item_N [delete]

    To achieve this, I dynamically generate the table via PHP into an HTML form. The form therefore has N [delete] buttons, and it uses the POST method to transfer data. For the deletion I wanted to submit the ID (the primary key in the MySQL table) of the corresponding item to the executing PHP script, so I introduced hidden fields (all named 'ID') that store the ID of the corresponding item. However, when pressing an arbitrary [delete], it seems to always submit just the last ID (i.e. the value of the last hidden ID field).

    Is there any way to submit just the ID field of the corresponding item without using multiple forms? Or is it possible to submit data from multiple forms with just one submit button? Or should I choose a completely different approach? The reason I want to do it in one single form is that there are some "global" parameters that should not be placed next to each item, but just once for the whole table.

    Read the article

  • Optimizing a MySQL query to avoid "Using filesort"

    - by usef_ksa
    I need your help optimizing this query to avoid "Using filesort". The job of the query is to select all the articles that belong to a specific tag. The query is:

        SELECT title FROM tag, article
        WHERE tag = 'Riyad' AND tag.article_id = article.id
        ORDER BY tag.article_id;

    The table structures are the following:

    Tag table:

        CREATE TABLE `tag` (
            `tag` VARCHAR(30) NOT NULL,
            `article_id` INT NOT NULL,
            INDEX (`tag`)
        ) ENGINE = MYISAM;

    Article table:

        CREATE TABLE `article` (
            `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            `title` VARCHAR(60) NOT NULL
        ) ENGINE = MYISAM;

    Sample data:

        INSERT INTO `article` VALUES (1, 'About Riyad');
        INSERT INTO `article` VALUES (2, 'About Newyork');
        INSERT INTO `article` VALUES (3, 'About Paris');
        INSERT INTO `article` VALUES (4, 'About London');
        INSERT INTO `tag` VALUES ('Riyad', 1);
        INSERT INTO `tag` VALUES ('Saudia', 1);
        INSERT INTO `tag` VALUES ('Newyork', 2);
        INSERT INTO `tag` VALUES ('USA', 2);
        INSERT INTO `tag` VALUES ('Paris', 3);
        INSERT INTO `tag` VALUES ('France', 3);
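    A sketch of the usual fix (not from the original post): with an equality predicate on the leading column, a composite index on (tag, article_id) lets MySQL read the rows already ordered by article_id, so the filesort disappears.

        ALTER TABLE tag ADD INDEX idx_tag_article (`tag`, `article_id`);

        -- verify: the Extra column should no longer show "Using filesort"
        EXPLAIN SELECT title
        FROM tag JOIN article ON tag.article_id = article.id
        WHERE tag.tag = 'Riyad'
        ORDER BY tag.article_id;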

    Read the article

  • Generic unit test scheduling

    - by Raphink
    Hello, I'm (re)writing a program that does generic unit test scheduling. The current program is a single-threaded Perl program, but I'm willing to modularize it and parallelize the tests. I'm also considering rewriting it in Python. Here is what I need to do:

    I have a list of tests, each with the following attributes:

    - uri: a URI to test (could be HTTP/HTTPS/SSH/local);
    - depends: an associative array of tests/values that this test depends on;
    - join: a list of DB joins to be added when selecting items to process in this test;
    - depends_db: additional conditions to add to the DB request when selecting items to process in this test.

    The program builds a dependency tree, beginning with the tests that have no dependencies. For each test:

    - a list of items is selected from the database using the conditions (results of depending tests, joins and depends_db);
    - the list of items is sent to the URI (using POST or stdin);
    - the result is retrieved as a YAML file listing the state and comments for the test for each tested item;
    - the results are stored in the DB;
    - the test returns, allowing depending tests to be performed.

    Finally, the program generates reports (CSV, DB, graphviz) of the performed tests.

    The primary use of this program currently is to test a fleet of machines against services such as backup, DNS, etc. The tests can then be:

    - backup: hosted on the backup machine(s), called through HTTP, checks if the machines' backups went well;
    - DNS: hosted on the local machine, called via stdin, checks if the machines' FQDNs have valid DNS entries.

    Does such a tool/module already exist? What would be the best implementation to achieve this (using Perl or Python)?

    Read the article

  • Design Decision - Scaling out web based application's architecture

    - by Vadi
    This question is about a design decision. I am currently working on a web project that will have 40K users to start with and is expected to grow to 50M users within a couple of months (not concurrent users, though). I would like an architecture that can be scaled out easily without much effort.

    To explain, let me use a trivial scenario: user entities and services such as CreateUser, AuthenticateUser, etc. are simple method calls for the page controllers. But once the traffic increases, services related to user entities (such as authentication) have to be moved out to a different internal server to spread the load. At the same time, using RPC calls over the network when the user count is only 40K would be overkill.

    My proposal is to use IPC initially, and to switch internally to TCP-based RPC calls when we need to scale out. For example, I am referring to System.IO.Pipes.NamedPipeServerStream to start with, moving to a TcpListener later on. If we have a proper design that encapsulates this approach, it would be easy to scale services out onto multiple network servers while avoiding network calls when the user count is small.

    Is this the best approach? Any suggestions would be great.

    Note: database scaling is definitely the second-phase optimization, so we have already put an architectural design in place to easily partition data when traffic increases. The primary bottleneck over time will be the application servers.

    Read the article

  • CodeIgniter question: making an anchor load a page containing data from a referenced row in the DB

    - by thrice801
    Hi, I'm trying to learn the CodeIgniter library and object-oriented PHP in general, and have a question. So far I've made a page which loads all of the rows from my database and, for each row, echoes an anchor tag linking to the following structure:

        echo anchor("videos/video/$row->video_id", $row->video_title);

    I have a class called Videos which extends the controller; within that class there are index() and video(), and video() is being called correctly (when you click on a video title, it sends you to videos/video/5, for example, 5 being the primary key of the table I'm working with). So basically all I'm trying to do is pass that 5 back to the controller, and then have that video's page output the corresponding row's data from the videos table. My controller function for video looks like this:

        function video()
        {
            $data['main_content'] = 'video';
            $data['video_title'] = 'test';
            $this->load->view('includes/template', $data);
        }

    So instead of 'test', $data['video_title'] should be the value returned by a query that says: in the table "videos", find the row with video_id 5, and make $data['video_title'] equal to the value of video_title in the database. I should have this figured out by now but don't; any help would be appreciated!

    Read the article

  • Hibernate Auto-Increment Setup

    - by dharga
    How do I define an entity for the following table? I've got something that isn't working, and I just want to see what I'm supposed to do.

        USE [BAMPI_TP_dev]
        GO
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        SET ANSI_PADDING ON
        GO
        CREATE TABLE [dbo].[MemberSelectedOptions](
            [OptionId] [int] NOT NULL,
            [SeqNo] [smallint] IDENTITY(1,1) NOT NULL,
            [OptionStatusCd] [char](1) NULL
        ) ON [PRIMARY]
        GO
        SET ANSI_PADDING OFF

    This is what I have already that isn't working:

        @Entity
        @Table(schema="dbo", name="MemberSelectedOptions")
        public class MemberSelectedOption extends BampiEntity implements Serializable {

            @Embeddable
            public static class MSOPK implements Serializable {
                private static final long serialVersionUID = 1L;

                @Column(name="OptionId")
                int optionId;

                @GeneratedValue(strategy=GenerationType.IDENTITY)
                @Column(name="SeqNo", unique=true, nullable=false)
                BigDecimal seqNo;

                // Getters and setters here...
            }

            private static final long serialVersionUID = 1L;

            @EmbeddedId
            MSOPK pk = new MSOPK();

            @Column(name="OptionStatusCd")
            String optionStatusCd;

            // More getters and setters here...
        }

    I get the following stack trace:

        [5/25/10 15:49:40:221 EDT] 0000003d JDBCException E org.slf4j.impl.JCLLoggerAdapter error Cannot insert explicit value for identity column in table 'MemberSelectedOptions' when IDENTITY_INSERT is set to OFF.
        [5/25/10 15:49:40:221 EDT] 0000003d AbstractFlush E org.slf4j.impl.JCLLoggerAdapter error Could not synchronize database state with session
        org.hibernate.exception.SQLGrammarException: could not insert: [com.bob.proj.ws.model.MemberSelectedOption]
            at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:90)
            at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66)
            at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2285)
            at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2678)
            at org.hibernate.action.EntityInsertAction.execute(EntityInsertAction.java:79)
            at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:279)
            at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:263)
            at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:167)
            at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
            at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:50)
            at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1028)
            at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:366)
            at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:137)
            at com.bcbst.bamp.ws.dao.MemberSelectedOptionDAOImpl.saveMemberSelectedOption(MemberSelectedOptionDAOImpl.java:143)
            at com.bcbst.bamp.ws.common.AlertReminder.saveMemberSelectedOptions(AlertReminder.java:76)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)

    Read the article

  • Slow retrieval of data from SQLite using a ContentProvider

    - by Arlyn
    I have an Android application (running 4.0.3) that stores a lot of data in Table A, which resides in a SQLite database. I am using a ContentProvider as an abstraction layer above the database. "Lots of data" here means almost 80,000 records per month. Table A is structured like this:

        String SQL_CREATE_TABLE = "CREATE TABLE IF NOT EXISTS " + TABLE_A + " ( "
            + COLUMN_ID + " INTEGER PRIMARY KEY NOT NULL"
            + "," + COLUMN_GROUPNO + " INTEGER NOT NULL DEFAULT(0)"
            + "," + COLUMN_TIMESTAMP + " DATETIME UNIQUE NOT NULL"
            + "," + COLUMN_TAG + " TEXT"
            + "," + COLUMN_VALUE + " REAL NOT NULL"
            + "," + COLUMN_DEVICEID + " TEXT NOT NULL"
            + "," + COLUMN_NEW + " NUMERIC NOT NULL DEFAULT(1)"
            + " )";

    Here is the index statement:

        String SQL_CREATE_INDEX_TIMESTAMP = "CREATE INDEX IF NOT EXISTS "
            + TABLE_A + "_" + COLUMN_TIMESTAMP + " ON " + TABLE_A
            + " (" + COLUMN_TIMESTAMP + ") ";

    I have defined the columns, as well as the table name, as String constants. I am already experiencing a significant slowdown when retrieving data from Table A. The problem is that when I retrieve data from this table, I first put it in an ArrayList and then display it. Obviously, this is possibly the wrong way of doing things, and I am trying to find a better approach using a ContentProvider. But this is not the problem that bothers me.

    The problem is that, for some reason, it takes a lot longer to retrieve data from the other tables, which have at most 12 records each. I see this delay increase as the number of records in Table A increases, which does not make any sense. I can understand a delay when retrieving data from Table A itself, but why the delay in retrieving data from other tables? To clarify, I do not experience this delay if Table A is empty or has fewer than 3000 records. What could be the problem?
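    A small diagnostic sketch, run in the sqlite3 shell (the table and column names are assumptions based on the constants in the question): before restructuring anything, it is worth confirming what SQLite actually does with these queries and keeping the planner statistics fresh.

        -- show whether a query uses the timestamp index or scans the table
        EXPLAIN QUERY PLAN
        SELECT * FROM TableA WHERE timestamp > '2012-01-01 00:00:00';

        -- refresh planner statistics after large inserts
        ANALYZE;

        -- reclaim space and reduce fragmentation from heavy insert/delete churn;
        -- fragmentation of the shared database file can slow unrelated tables
        VACUUM;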

    Read the article

  • django url user id versus userprofile id problem

    - by dana
    Hello there, I have a mini community where each user can search for and find another user's profile. UserProfile is a model class, indexed differently from the User model class (a user's id is not equal to their userprofile id). But I cannot see a user's profile by typing the corresponding id into the URL; I only ever see the profile of the currently logged-in user. Why is that? I'd also like to have the username (also a unique field on the user table) in my URL rather than the id (a number). The guilty part of the code is below; what can I replace request.user with so that it will display the user I searched for, and not the one currently logged in?

        def profile_view(request, id):
            u = UserProfile.objects.get(pk=id)
            cv = UserProfile.objects.filter(created_by=request.user)
            blog = New.objects.filter(created_by=request.user)
            return render_to_response('profile/publicProfile.html', {
                'u': u,
                'cv': cv,
                'blog': blog,
            }, context_instance=RequestContext(request))

    In the urls of the accounts app:

        url(r'^profile_view/(?P<id>\d+)/$', profile_view, name='profile_view'),

    And in the template:

        <h3>Recent Entries:</h3>
        {% load pagination_tags %}
        {% autopaginate list 10 %}
        {% paginate %}
        {% for object in list %}
            <li>{{ object.post }} <br />
            Voted: {{ vote.count }} times.<br />
            {% for reply in object.reply_set.all %}
                {{ reply.reply }} <br />
            {% endfor %}
            <a href=''>{{ object.created_by }}</a> <br />
            {{ object.date }} <br />
            <a href="/vote/save_vote/{{ object.id }}/">Vote this</a>
            <a href="/replies/save_reply/{{ object.id }}/">Comment</a>
            </li>
        {% endfor %}

    Thanks in advance!

    Read the article

  • NHibernate: mapping the aspnet_Users table

    - by Michael D. Kirkpatrick
    My User table, which I want to map to aspnet_Users:

        <class name="User" table="`User`">
            <id name="ID" column="ID" type="Int32" unsaved-value="0">
                <generator class="native" />
            </id>
            <property name="UserId" column="UserId" type="Guid" not-null="true" />
            <property name="FullName" column="FullName" type="String" not-null="true" />
            <property name="PhoneNumber" column="PhoneNumber" type="String" not-null="false" />
        </class>

    My aspnet_Users table:

        <class name="aspnet_Users" table="aspnet_Users">
            <id name="ID" column="UserId" type="Guid" />
            <property name="UserName" column="UserName" type="string" not-null="false" />
        </class>

    I tried adding one-to-one, one-to-many and many-to-one mappings. The closest I can get is this error: "Object of type 'System.Guid' cannot be converted to type 'System.Int32'." How do I create a one-way mapping from User to aspnet_Users via the UserId column in User? I only want a reference so I can extract read-only information, affect sorts, etc. I still need to leave the UserId column in User set up as it is now. Maybe a virtual reference keyed off UserId? Is this even possible with NHibernate? Unfortunately it acts like it only wants to use ID from User to map to aspnet_Users. Changing the User table so that its primary key is UserId instead of ID is not an option at this point. Any help would be greatly appreciated. Thanks in advance.

    Read the article

  • database schema eligible for delta synchronization

    - by WilliamLou
    This is a question for discussion only. Right now, I need to redesign a MySQL table. Basically, this table contains all the contract records I synchronize from another database. A contract record can be modified or deleted, and users can add new contract records via a GUI. At this stage, the table structure is exactly the same as the contract info (columns: serial number, expiry date, etc.). As a result, I can only synchronize the whole table (delete all old records and replace them with new ones). If I want a delta synchronization (syncing only modified, new and deleted records), how should I change the database schema?

    Here is the method I have come up with, but I'd like your suggestions because I think this is a common scenario in database applications:

    1) Introduce a sequence-number concept/column: for each sync run, mark the newly added, modified and deleted records with this sequence number. By recording the last synchronized sequence number, only records with a higher sequence number need to be passed.
    2) Because deleted contracts can be added back, and the original table has primary key constraints: should I create another table for the deleted records, or add a flag column to indicate whether a contract has been deleted?

    I hope I have explained my question clearly. Anyway, if you know any articles about this, or have suggestions of your own, please let me know. Thanks!
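    A sketch of option 1) combined with a soft-delete flag, avoiding the separate deleted-records table (column and index names below are illustrative, not from the original):

        ALTER TABLE contract
            ADD COLUMN seq_no BIGINT NOT NULL DEFAULT 0,
            ADD COLUMN is_deleted TINYINT(1) NOT NULL DEFAULT 0,
            ADD INDEX idx_contract_seq (seq_no);

        -- every insert/update/delete stamps the row with the next sequence number
        -- (deletes just set is_deleted = 1), so a client syncs with one query:
        SELECT * FROM contract WHERE seq_no > @last_synced_seq;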

    Read the article

  • Getting a user's Facebook profile URL

    - by Greg Pabst
    I am creating a registry site so that similar people can find each other easily. I don't want to use Facebook Connect as the primary log-in method, or use Facebook to store their information; I'll be creating a database on my end to store that info. For security reasons I won't be displaying users' addresses, phone numbers or email addresses, so I wanted to provide the next best way for people to connect with each other, which is where Facebook comes in. Normally I would just ask them to type their Facebook URL into a text box, but I don't think most people know what their URL is, which is why I think I need to use Facebook Connect.

    So here is my idea: when a user signs up, there is a checkbox that, when checked, signifies they are allowing people to find them on Facebook. I assume that once they click the register button, a Facebook Connect popup will show up asking for permission to access their Facebook account. When they "allow" it, I can get their profile URL. All I need is their Facebook profile URL; I don't want any other Facebook features or information.

    Is Facebook Connect the best thing to use for this scenario? Is there an easier way? Several months ago the Facebook Connect site used to have examples of doing this, but all the documentation has been rearranged and changed, and I can't seem to find the information. Any help you can provide would be great!

    Read the article

  • Full Text Index type column is empty

    - by RemotecUk
    I am trying to create a full-text index on a VARBINARY(MAX) field in my SQL Server 2008 database. The steps I am taking are as follows:

    1. Table: dbo.Records. Right-click on the table, select "Full Text Index", then select "Define Index...".
    2. I choose the primary key of my table (field name Id, type UNIQUEIDENTIFIER).
    3. I then get the screen with the options Available Columns, Language for Word Breaker and Type Column. I select my VARBINARY(MAX) field called Chart as the Available Column by ticking the box, and "English" as the Language for Word Breaker.
    4. Then... I try to select the Type Column, but there are no entries in it, and I cannot proceed by clicking "Next" until it is populated.

    Why are there no entries in this column for selection, and what should be in there?

    Note 1: the VARBINARY(MAX) field is linked to a filegroup, if that makes any difference.
    Note 2: I also noticed that in the table designer I cannot set the full-text option on that same field to "Yes" - it is permanently stuck on "No". Thanks.
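    For reference, a hedged T-SQL sketch of what the wizard is asking for: the type column must be an existing character column on the same table holding each document's file extension (e.g. '.pdf'), so the indexer can pick the right filter for the VARBINARY content. The column name, extension value and key index name below are assumptions:

        -- add a column recording each document's file extension
        ALTER TABLE dbo.Records ADD FileExtension varchar(8) NULL;
        UPDATE dbo.Records SET FileExtension = '.pdf';  -- assumption: all charts are PDFs

        -- PK_Records is a placeholder for the table's unique key index name
        CREATE FULLTEXT INDEX ON dbo.Records (Chart TYPE COLUMN FileExtension LANGUAGE 1033)
            KEY INDEX PK_Records;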

    Read the article

  • JPA @ManyToMany on only one side?

    - by Ethan Leroy
    I am trying to refresh a @ManyToMany relation, but it gets cleared instead. My Project class looks like this:

        @Entity
        public class Project {
            ...
            @ManyToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
            @JoinTable(name = "PROJECT_USER",
                joinColumns = @JoinColumn(name = "PROJECT_ID", referencedColumnName = "ID"),
                inverseJoinColumns = @JoinColumn(name = "USER_ID", referencedColumnName = "ID"))
            private Collection<User> users;
            ...
        }

    But I don't have - and I don't want - the collection of Projects in the User entity. When I look at the generated database tables, they look good: they contain all columns and constraints (primary/foreign keys). But when I persist a Project that has a list of Users (and the users are still in the database), the mapping table gets updated; yet when I refresh the project afterwards, the list of Users is cleared. For better understanding:

        Project project = ...; // new project with users that are available in the db
        System.out.println(project.getUsers().size()); // prints 5
        em.persist(project);
        System.out.println(project.getUsers().size()); // prints 5
        em.refresh(project);
        System.out.println(project.getUsers().size()); // prints 0

    So, how can I refresh the relation between User and Project?

    Read the article

  • I deleted a full set of columns in a save, but have the original larger sheet. Can I get that back?

    - by Ben Henley
    I have an original sheet that has over 39,000 lines in it. I knocked it down to the 1,800 lines that I want to import into my database. However, dumb me, I selected only the visible cells and killed about 10 columns that I need. Is there a way to compare against the original sheet using a specific column (i.e. SKU) and pull the data from the original to put back into the missing columns, or do I have to re-edit the whole thing down again? Please help, as this takes a good day or two to minimize. Any and all help is much appreciated. Below are the column lists on the edited sheet vs. the original sheet.

    Edited sheet: SKU, DESCRIPTION, VENDOR PART #, RETAIL UNIT CONVERSION, RETAIL U/M, RETAIL DEPARTMENT, VENDOR NAME, SELL PACK QUANTITY, BREAK SELL PACK FLAG, BLISH VENDOR #, FINE LINE CLASS, ITEM ACTION FLAG, PRIMARY UPC, STOCK U/M, WEIGHT, LENGTH, WIDTH, HEIGHT, SHIP-VIA EXCLUSION, HAZARDOUS CODE, PRICE, SUGGESTED RETAIL

    Original sheet: SKU, DESCRIPTION, RETAIL UNIT CONVERSION, RETAIL U/M, RETAIL DEPARTMENT, VENDOR NAME, SELL PACK QUANTITY, BREAK SELL PACK FLAG, FINE LINE CLASS, HAZARDOUS CODE, PRICE, SUGGESTED RETAIL, RETAIL SENSITIVITY CODE, 2ND UPC CODE, 3RD UPC CODE, 4TH UPC CODE, HEADLINE, BULLET #1, BULLET #2, BULLET #3, BULLET #4, BULLET #5, BULLET #6, BULLET #7, BULLET #8, BULLET #9, SIZE, COLOR, CASE QUANTITY, PRODUCT LINE

    Thanks, Ben

    Read the article

  • Match entities fulfilling filter (strict superset of search)

    - by Jon
    I have an entity (let's say Person) with a set of arbitrary attributes, each with a known subset of values. I need to search for all of these entities that match all my filter conditions. That is, given a set of attributes A, I need to find all people whose set of attributes is a superset of A. For example, my table structures look like this:

        Person:
        id | name
        1  | John Doe
        2  | Jane Roe
        3  | John Smith

        Attribute:
        id | attr_name
        1  | Sex
        2  | Eye Color

        ValidValue:
        id | attr_id | value_name
        1  | 1       | Male
        2  | 1       | Female
        3  | 2       | Blue
        4  | 2       | Green
        5  | 2       | Brown

        PersonAttributes:
        id | person_id | attr_id | value_id
        1  | 1         | 1       | 1
        2  | 1         | 2       | 3
        3  | 2         | 1       | 2
        4  | 2         | 2       | 4
        5  | 3         | 1       | 1
        6  | 3         | 2       | 4

    In JPA, I have entities built for all of these tables. What I'd like to do is perform a search for all entities matching a given set of attribute-value pairs. For instance, I'd like to be able to find all males (John Doe and John Smith), all people with green eyes (Jane Roe or John Smith), or all females with green eyes (Jane Roe). I see that I can already take advantage of the fact that I only really need to match on value_id, since it is unique and tied to the attr_id. But where can I go from there? I've been trying something like the following, given that the ValidValue is unique in all cases:

        select distinct p from Person p join p.personAttributes a where a.value IN (:values)

    I've tried putting my set of required values in as "values", but that gives me errors no matter how I structure it.

    I also have to handle something more complicated eventually, but at this point I'd be happy with solving the first problem cleanly. If it's possible, though: the Attribute table actually has a field for a default value:

        id | attr_name | default_value
        1  | Sex       | 1
        2  | Eye Color | 5

    If the value you're searching on happens to be the default value, I want the query to also return any people that have no explicit value set for that attribute, because in the application logic that means they inherit the default value. Again, I'm more concerned about the primary question, but if someone who can help with that also has some idea of how to do this one, I'd be extremely grateful.
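    A plain-SQL sketch of the usual relational-division answer (the JPQL equivalent follows the same shape): filter to the wanted value ids, group by person, and require the match count to equal the size of the filter set. Value ids 2 (Female) and 4 (Green) come from the sample data above, so this returns Jane Roe:

        SELECT p.id, p.name
        FROM Person p
        JOIN PersonAttributes pa ON pa.person_id = p.id
        WHERE pa.value_id IN (2, 4)              -- the filter set A
        GROUP BY p.id, p.name
        HAVING COUNT(DISTINCT pa.value_id) = 2;  -- size of A: all must match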

    Read the article

  • Are parameters really enough to prevent SQL injections?

    - by Rune Grimstad
    I've been preaching, both to my colleagues and here on SO, about the goodness of using parameters in SQL queries, especially in .NET applications. I've even gone so far as to promise that they give immunity against SQL injection attacks. But I'm starting to wonder if this really is true. Are there any known SQL injection attacks that will be successful against a parameterized query? Can you, for example, send a string that causes a buffer overflow on the server?

    There are of course other considerations to make to ensure that a web application is safe (like sanitizing user input and all that stuff), but right now I am thinking of SQL injections. I'm especially interested in attacks against MSSQL 2005 and 2008, since they are my primary databases, but all databases are interesting.

    Edit: to clarify what I mean by parameters and parameterized queries. By using parameters I mean using "variables" instead of building the SQL query in a string. So instead of doing this:

        SELECT * FROM Table WHERE Name = 'a name'

    We do this:

        SELECT * FROM Table WHERE Name = @Name

    and then set the value of the @Name parameter on the query/command object.
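    A concrete sketch of the distinction in T-SQL: with sp_executesql, the parameter value never becomes part of the compiled statement text, so a hostile value arrives as data rather than as SQL. The value below is an illustrative injection attempt:

        EXEC sp_executesql
            N'SELECT * FROM [Table] WHERE Name = @Name',
            N'@Name nvarchar(100)',
            @Name = N'a name'' OR 1=1 --';  -- matched as a literal string, never parsed as SQL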

    Read the article

  • From ASPX to WCF

    - by Barguast
    I'm hoping someone can advise me on how to solve my networking scenario. Both the client and server are to be C#/.NET based. I basically want to invoke some kind of web service from my client in order to retrieve both binary data (e.g. files) and serialised objects and lists of objects (e.g. database query results).

    At the moment, I'm using ASPX pages, using the query string to provide parameters, and I get back either the binary data or the binary data of the serialised messages. This affords me a lot of flexibility: I can choose how to transmit the data, perform simultaneous requests, cancel ongoing requests, etc. Since I can control the serialised format, I can also deserialise lists of objects as they are received, which is crucial.

    My problem isn't a problem as such, but this feels a little hack-ish, and I can't help but wonder if there are better ways to go about it. I'm considering moving to WCF, or perhaps another technology, to see if it helps. However, I need to know whether it helps with my scenarios above; that is:

    - Can a WCF method return a list of objects, with the client receiving the items of this list as they arrive rather than getting the entire list on completion (i.e. streaming)? Does anyone know of any examples of this?
    - Am I likely to get any performance benefits from this? I don't know how well ASPX pages are tuned for this, as it surely isn't their primary purpose.
    - Are there any other approaches I should consider?

    Thanks for your time spent reading this. I hope you can help.

    Read the article

  • Will rel=canonical break site: queries ?

    - by Justin Grant
    Our company publishes our software product's documentation using a custom-built content management system with a dynamic URL namespace like this:

        http://ourproduct.com/documentation/version/pageid

    Where "version" is the version number to which the documentation applies, and "pageid" is a unique string which identifies that page in our back-end content management system. For example, if content (e.g. a page about configuration best practices) is unchanged between versions 3.0 and 4.0 of our product, it is reachable via two different URLs:

        http://ourproduct.com/documentation/3.0/configuration-best-practices
        http://ourproduct.com/documentation/4.0/configuration-best-practices

    This URL scheme allows us to scope Google search results to documentation for a particular product version, like this:

        configuration site:ourproduct.com/documentation/4.0

    But when the user is searching across all versions, we don't want Google to arbitrarily choose one of the URLs to show in results; we always want the latest version to show up. Hence our planned use of rel=canonical, so we can prescriptively tell Google which URL we want shown when multiple versions are being searched. (Users who do oddball things like searching two versions but not all of them are a corner case we don't care about; the primary use-cases are searching one version or searching all versions.)

    But what will happen to scoped searches if we do this? If my rel=canonical URL points to version 4.0, but my search is scoped to 3.0, will Google return a result? Even if you don't know the answer offhand, do you know of a site which uses rel=canonical to redirect across folders in a URL namespace? If so, I could run a few Google searches and figure out the answer.

    Read the article

  • git workflow incorporating many, but not all commits from many forks

    - by becomingGuru
    I have a git repo. It has been forked several times and many independent commits have been made on top of it - everything normal, as happens in many GitHub-hosted projects. Now, what exact workflow should I follow if I want to review all those commits individually and apply the ones I like?

    The workflow I followed, which is not optimal, was to create a branch named after each GitHub username, merge the changes into my master, and manually undo any changes from the commits I don't need (there were not many, so it worked). What I want is the ability to see all commits from the different forks individually, and to cherry-pick and apply them on top of my master. What workflow should I follow for that? And what GUI (gitk?) lets me see all the different individual commits?

    I realize that merge should be a primary part of the workflow, not cherry-pick, since cherry-picking creates a different commit (from git's point of view). Even rebasing others' changes on top of mine might not preserve the history on the graph to indicate that it is their commits I have rebased. So then, how do I ignore just a few commits out of a lot of them?

    I also think GitHub should have an "apply this commit on top of my master" action in their graph after each commit node, so I could just pull it after doing all that.

    Read the article

  • How can I work around SQL Server - Inline Table Value Function execution plan variation based on parameters

    - by Ovidiu Pacurar
    Here is the situation: I have a table-valued function with a datetime parameter, let's say tdf(p_date), that filters about two million rows, selecting those with a date column smaller than p_date, and computes some aggregate values on other columns. It works great, but if p_date is a custom scalar-valued function (returning the end of day, in my case), the execution plan is altered and the query goes from 1 second to 1 minute of execution time.

    A proof-of-concept table - 1K products, 2M rows:

        CREATE TABLE [dbo].[POC](
            [Date] [datetime] NOT NULL,
            [idProduct] [int] NOT NULL,
            [Quantity] [int] NOT NULL
        ) ON [PRIMARY]

    The inline table-valued function:

        CREATE FUNCTION tdf (@p_date datetime)
        RETURNS TABLE
        AS
        RETURN
        (
            SELECT idProduct, SUM(Quantity) AS TotalQuantity, MAX(Date) AS LastDate
            FROM POC
            WHERE (Date < @p_date)
            GROUP BY idProduct
        )

    The scalar-valued function:

        CREATE FUNCTION [dbo].[EndOfDay] (@date datetime)
        RETURNS datetime
        AS
        BEGIN
            DECLARE @res datetime
            SET @res = dateadd(second, -1, dateadd(day, 1, dateadd(ms, -datepart(ms, @date),
                       dateadd(ss, -datepart(ss, @date), dateadd(mi, -datepart(mi, @date),
                       dateadd(hh, -datepart(hh, @date), @date))))))
            RETURN @res
        END

    Query 1 - working great:

        SELECT * FROM [dbo].[tdf] (getdate())

    The end of the execution plan: Stream Aggregate (cost 13%) <- Clustered Index Scan (cost 86%).

    Query 2 - not so great:

        SELECT * FROM [dbo].[tdf] (dbo.EndOfDay(getdate()))

    The end of the execution plan: Stream Aggregate (cost 4%) <- Filter (cost 12%) <- Clustered Index Scan (cost 86%).
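    A common workaround, sketched under the assumption that the call can be wrapped in a batch or stored procedure: evaluate the scalar UDF once into a variable, so the optimizer sees a simple parameter again instead of an opaque function call, restoring the first plan.

        DECLARE @eod datetime;
        SET @eod = dbo.EndOfDay(getdate());

        SELECT * FROM [dbo].[tdf] (@eod);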

    Read the article

  • SQL Server Index cost

    - by yellowstar
    I have read that one of the trade-offs of adding table indexes in SQL Server is the increased cost of insert/update/delete queries, traded for better select performance. I can conceptually understand what happens on an insert, because SQL Server has to write entries into each index matching the new rows, but update and delete are a little murkier to me, because I can't quite wrap my head around what the database engine has to do.

    Let's take DELETE as an example and assume I have the following schema (pardon the pseudo-SQL):

        TABLE Foo
            col1 int,
            col2 int,
            col3 int,
            col4 int
            PRIMARY KEY (col1, col2)

        INDEX IX_1
            col3
            INCLUDE col4

    Now, if I issue the statement

        DELETE FROM Foo WHERE col1 = 12 AND col2 > 34

    I understand what the engine must do to update the table (or clustered index, if you prefer): the index is set up to make it easy to find the range of rows to be removed. However, at this point it also needs to update IX_1, and the query I gave it offers no obvious efficient way for the engine to find the corresponding index rows. Is it forced to do a full index scan at that point? Does the engine read the rows from the clustered index first and generate a smarter internal delete against the nonclustered index?

    It might help me wrap my head around this if I understood better what is going on under the hood, but I guess my real question is this: I have a database that is spending a significant amount of time on deletes, and I'm trying to figure out what I can do about it. When I display the execution plan for the deletion, it just shows a "Clustered Index Delete" entry on table Foo, which lists in the details section the other indexes that need to be updated, but gives no indication of the relative cost of those indexes. Are they all equal in this case? Is there some way to estimate the impact of removing one or more of these indexes without actually trying it?
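    One way to put numbers on "is this index worth its maintenance cost" before dropping anything - a sketch, assuming SQL Server 2005 or later (note that these statistics reset on instance restart): compare how often each index on Foo is read versus written.

        SELECT i.name,
               s.user_seeks + s.user_scans + s.user_lookups AS reads,
               s.user_updates AS writes
        FROM sys.dm_db_index_usage_stats AS s
        JOIN sys.indexes AS i
            ON i.object_id = s.object_id AND i.index_id = s.index_id
        WHERE s.object_id = OBJECT_ID('dbo.Foo')
          AND s.database_id = DB_ID();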

    Read the article

  • MassPay and MySQL

    - by Mike
    Hi, I am testing PayPal's MassPay using their 'MassPay NVP example', and I am having difficulty amending the code so that it pulls data from my MySQL database. Basically I have a users table in MySQL which contains an email address, payment status (paid/unpaid) and balance:

        CREATE TABLE `users` (
            `user_id` int(10) unsigned NOT NULL auto_increment,
            `email` varchar(100) collate latin1_general_ci NOT NULL,
            `status` enum('unpaid','paid') collate latin1_general_ci NOT NULL default 'unpaid',
            `balance` int(10) NOT NULL default '0',
            PRIMARY KEY (`user_id`)
        ) ENGINE=MyISAM AUTO_INCREMENT=6 DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci

    Data:

        1 [email protected] paid   100
        2 [email protected] unpaid 11
        3 [email protected] unpaid 20
        4 [email protected] unpaid 1
        5 [email protected] unpaid 20
        6 [email protected] unpaid 15

    I then created a query which selects users with an unpaid balance of $10 or more:

        $conn = db_connect();
        $query = $conn->query("SELECT * FROM users WHERE balance >= 10 AND status = 'unpaid'");

    What I would like is for each record returned from the query to populate the code below. The code I believe I need to amend is as follows:

        for ($i = 0; $i < 3; $i++) {
            $receiverData = array(
                'receiverEmail' => "[email protected]",
                'amount' => "example_amount",
            );
            $receiversArray[$i] = $receiverData;
        }

    However, I just can't get it to work. I have tried using mysqli_fetch_array and then replacing "[email protected]" with $row['email'] and "example_amount" with $row['balance'] in various ways, but it doesn't work. I also need the loop to run for however many rows the query returns, rather than the fixed < 3 in the for loop above. The end result I am looking for is for the $nvpStr string to be passed something like this:

        $nvpStr = "&EMAILSUBJECT=test&RECEIVERTYPE=EmailAddress&CURRENCYCODE=USD&[email protected]&L_Amt=11&[email protected]&L_Amt=11&[email protected]&L_Amt=20&[email protected]&L_Amt=20&[email protected]&L_Amt=15";

    Thanks

    Read the article

  • How should I manage my many-to-many relationships?

    - by wes
    Hello all, I have a database containing a couple of tables: files and users. The relationship is many-to-many, so I also have a table called users_files_ref which holds foreign keys to both of the above tables. Here's the schema of each table:

        files           - file_id, file_name
        users           - user_id, user_name
        users_files_ref - user_file_ref_id, user_id, file_id

    I'm using CodeIgniter to build a file-host application, and I'm right in the middle of adding the functionality that enables users to upload files. This is where I'm running into my problem: once I add a file to the files table, I need that new file's id to update the users_files_ref table. Right now I'm adding the record to the files table and then running a query to grab the last file added, so that I can get its ID, and then using that ID to insert the new users_files_ref record.

    I know this will work on a small scale, but I imagine there is a better way of managing these records, especially in a heavy-traffic scenario. I am new to relational database stuff but have been around PHP for a while, so please bear with me here :-) I have primary and foreign keys set up correctly for the files, users, and users_files_ref tables; I'm just wondering how to manage adding file records in this scenario? Thanks for any help provided, it's much appreciated. -Wes
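    For what it's worth, a minimal sketch of the standard MySQL answer: LAST_INSERT_ID() is scoped to the current connection, so the extra "grab the last file added" query isn't needed and the pattern stays safe under concurrent uploads. The values below are illustrative:

        INSERT INTO files (file_name) VALUES ('example.txt');

        -- LAST_INSERT_ID() returns the AUTO_INCREMENT id generated by THIS
        -- connection's last insert, unaffected by other clients' inserts
        INSERT INTO users_files_ref (user_id, file_id)
        VALUES (42, LAST_INSERT_ID());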

    Read the article

  • UUIDs in Rails3

    - by Rob Wilkerson
    I'm trying to set up my first Rails 3 project and, early on, I'm running into problems with either uuidtools, my UUIDHelper, or perhaps callbacks. I'm obviously trying to use UUIDs, and (I think) I've set things up as described in Ariejan de Vroom's article. I've tried using the UUID as a primary key and also as simply a supplemental field, but it seems like the UUIDHelper is never being called. I've read many mentions of callbacks and/or helpers changing in Rails 3, but I can't find any specifics that would tell me how to adjust. Here's my setup as it stands at this moment (there have been a few iterations):

        # migration
        class CreateImages < ActiveRecord::Migration
          def self.up
            create_table :images do |t|
              t.string :uuid, :limit => 36
              t.string :title
              t.text :description

              t.timestamps
            end
          end
          ...
        end

        # lib/uuid_helper.rb
        require 'rubygems'
        require 'uuidtools'

        module UUIDHelper
          def before_create()
            self.uuid = UUID.timestamp_create.to_s
          end
        end

        # models/image.rb
        class Image < ActiveRecord::Base
          include UUIDHelper
          ...
        end

    Any insight would be much appreciated. Thanks.

    Read the article
