Search Results

Search found 54615 results on 2185 pages for 'sql version patching'.

  • Unique identifier for an email

    - by Skywalker
    I am writing a C# application which allows users to store emails in an MS SQL Server database. Often, multiple users will be copied on an email from a customer. If they all try to add the same email to the database, I want to make sure it is only added once. MD5 springs to mind as a way to do this. I don't need to worry about malicious tampering, only to make sure that the same email will map to the same hash and that no two emails with different content will map to the same hash. My question really boils down to how one would combine multiple fields into one MD5 (or other) hash value. Some of these fields will have a single value per email (e.g. subject, body, sender email address) while others will have multiple values (varying numbers of attachments and recipients). I want a way of uniquely identifying an email that is platform- and language-independent (not based on serialization). Any advice?
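    One workable recipe is to canonicalize the fields yourself before hashing: fix a field order, normalize case/whitespace where equality should be case-insensitive, sort the multi-valued fields, and length-prefix each value so field boundaries can't collide. A minimal C# sketch of that idea (the field choices and normalization rules here are illustrative, not a standard):

    ```csharp
    using System;
    using System.Linq;
    using System.Security.Cryptography;
    using System.Text;

    public static class EmailFingerprint
    {
        public static string Compute(string sender, string subject, string body,
                                     string[] recipients, byte[][] attachments)
        {
            var canonical = new StringBuilder();
            Append(canonical, sender.Trim().ToLowerInvariant());
            Append(canonical, subject);
            Append(canonical, body);

            // Sort multi-valued fields so recipient/attachment order can't change the hash.
            foreach (var r in recipients.Select(x => x.Trim().ToLowerInvariant())
                                        .OrderBy(x => x, StringComparer.Ordinal))
                Append(canonical, r);

            using (var md5 = MD5.Create())
            {
                foreach (var tag in attachments.Select(a => Convert.ToBase64String(md5.ComputeHash(a)))
                                               .OrderBy(x => x, StringComparer.Ordinal))
                    Append(canonical, tag);

                // UTF-8 keeps the byte layout identical across platforms and languages.
                return Convert.ToBase64String(md5.ComputeHash(Encoding.UTF8.GetBytes(canonical.ToString())));
            }
        }

        // Length-prefixing stops ("ab","c") and ("a","bc") from producing the same stream.
        private static void Append(StringBuilder sb, string value)
        {
            sb.Append(value.Length).Append(':').Append(value);
        }
    }
    ```

    Any language that can produce the same canonical string and run MD5 will then agree on the identifier, and a unique constraint on the hash column makes the "insert once" rule cheap to enforce.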

  • JQuery Autocomplete - format listing and only return a part to the text box

    - by Jason
    I'm using the jQuery Autocomplete and it's working just fine. I'm using it to let someone search for users in a database by last name or ID number. Right now the drop-down list is built from the results of a SQL query and looks something like: $row_rst['lName'] . ', ' . $row_rst['fName'] . " - " . $row_rst['user'] . "|" . $row_rst['id'] which outputs something like: Jones, Henry - hjones Gibbons, Peter - pgibbons When I pick Henry, the text box gets Jones, Henry - hjones and the hidden field gets his ID. I'd like to format the drop-down in columns and have only Jones, Henry returned to the text box. Is either of those possible? I'm thinking it has to do with either formatItem(row) or formatResult(row), but I'm not sure, and I can't seem to find out how to go about this.

  • Sitemaps on multiple front end servers using a http handler, is that a good idea?

    - by Rihan Meij
    Question 1: We would like to generate a site map for our CMS site. We have multiple front-end servers and approximately a million articles. Environment: multiple MS SQL servers; multiple front-end servers (load balanced); ASP.NET on IIS 6; Windows 2003. Keeping the site maps (the site map index file and the site map files) physically on the front-end servers would be an operations nightmare and error prone, so we are considering using HTTP handlers instead, so that whichever server gets the request can serve the correct XML file.

    Question 2: If we ping Google each time we publish a new article, will that affect us negatively if it happens more than once an hour? Thanks!
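    On question 1, the handler route is sound. A skeletal version of the idea (the builder class is hypothetical; registration would go in web.config under <httpHandlers> on IIS 6):

    ```csharp
    using System;
    using System.Web;

    // Answers /sitemap*.xml on whichever load-balanced server receives the request,
    // generating the XML from the shared articles database instead of local disk.
    public class SitemapHandler : IHttpHandler
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            // Hypothetical helper: queries SQL Server and emits <sitemapindex>/<urlset>.
            string xml = SitemapBuilder.BuildXml(context.Request.Path);

            context.Response.ContentType = "text/xml";
            // Let downstream caches absorb repeated crawler hits.
            context.Response.Cache.SetCacheability(HttpCacheability.Public);
            context.Response.Cache.SetExpires(DateTime.UtcNow.AddHours(1));
            context.Response.Write(xml);
        }
    }

    internal static class SitemapBuilder
    {
        public static string BuildXml(string requestPath)
        {
            // Placeholder: the real implementation would page through the million
            // articles in the shared SQL Server databases.
            return "<?xml version=\"1.0\" encoding=\"UTF-8\"?><urlset xmlns=\"http://www.sitemaps.org/schemas/sitemap/0.9\"/>";
        }
    }
    ```

    Because the XML is built from the shared database on demand, every front-end server returns the same answer and nothing has to be synchronized to disk.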

  • Why can't I see the 'dataset project' property in Visual Studio DataSet designer?

    - by Ryan
    Hi, I am trying to follow the 'n-tier app design' tutorials, and they tell me to set the DataSet Project property from the DataSet Designer in VS to split table adapters and entities into separate projects. I can't see that property! (I'm looking in the same place shown in the videos... all other properties match.) Does anybody know why? The video is here: http://windowsclient.net/learn/video.aspx?v=14625 (4:36 is where the property is set). I'm using VS C# 2008 Express with SQL Server Express 2008. Thanks a lot for any help. Ryan

  • ISQL Perform instruction: after editadd editupdate of table vs. after add update of table

    - by Frank Computer
    INFORMIX-SQL 7.3 Perform screens: According to the documentation, in an "after editadd editupdate of table" control block the instructions are executed before the row is added to or updated in the table, whereas in an "after add update of table" control block the instructions are executed after the row has been added or updated. That supposedly means instructions that alter the values of field-tags linked to table.columns would not be committed to the table, while field-tags linked to displayonly fields would still change. However, when using "after add update of table" I placed instructions which alter the values of field-tags linked to table.columns, and their displayed and committed values changed as well! I would have thought that an "after add update of table" block could only alter displayonly fields.

  • Integer in MySQL procedure, syntax error

    - by confiq
    I made a simple procedure just to demonstrate: CREATE PROCEDURE `demo`(demo_int int) BEGIN DECLARE minid INT; SELECT min(id) FROM (SELECT id FROM events LIMIT demo_int,9999999999999999) as hoo INTO minid; END$$ The problem is with demo_int: if I change it to LIMIT 1,9999999999999999 it works, but LIMIT demo_int,9999999999999999 does not. It gives the error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'demo_int,9999999999999999) as hoo INTO minid; END' at line 4 (errno: 1064). Any clues?
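    The likely cause is that older MySQL servers do not allow a routine parameter or variable in a LIMIT clause (the manual says support arrived around 5.5.6). On such servers the usual workaround is to splice the value into dynamic SQL with PREPARE. A sketch of the procedure rewritten that way:

    ```sql
    CREATE PROCEDURE `demo`(demo_int INT)
    BEGIN
      DECLARE minid INT;
      -- LIMIT cannot reference demo_int directly here, so build the statement as text.
      SET @sql = CONCAT(
          'SELECT MIN(id) INTO @minid FROM ',
          '(SELECT id FROM events LIMIT ', demo_int, ',18446744073709551615) AS hoo');
      PREPARE stmt FROM @sql;
      EXECUTE stmt;
      DEALLOCATE PREPARE stmt;
      SET minid = @minid;
    END$$
    ```

    18446744073709551615 is the value the MySQL manual suggests for "all remaining rows". If the intent is to skip rows in id order, a simpler ORDER BY id ... LIMIT demo_int, 1 query would return the same minimum directly.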

  • Reinventing the wheel: Node.JS/event-driven programming vs. functional programming?

    - by ivanTheTerrible
    There's a lot of hype lately about Node.JS, an event-driven framework using JavaScript callbacks. To my limited understanding, its primary advantage seems to be that you don't have to wait for each step sequentially (for example, you can fetch the SQL results while calling other functions too). So my question is: how is this different from, or better than, functional languages like CL, Haskell, Clojure, etc.? If it's not better, why don't people just use functional languages (instead of reinventing the wheel with JavaScript)? Please note that I have no experience in either Node.JS or functional programming, so some basic explanation would be helpful.

  • How to store data that contains quotes in MySQL

    - by Nitz
    Hey guys, I have one problem. In one of my forms I use the rich text editor from Yahoo, and I want to store the data from that textarea in a MySQL database. Users can enter anything in the textarea, including lots of double or single quotes, so I need to store data that may contain many quotes of either kind. Normally we build the SQL by putting the data into a variable and concatenating it into the statement, but now that the variable contains quotes, the statement breaks. I can't strip the quotes out, because the styling generated by the rich text editor depends on them. How can I store that data without affecting the styles?

  • AppFabric Cache errors

    - by Joseph
    The AppFabric Cache in our production environment crashes almost every day and is highly unstable. The errors below are logged: Microsoft.ApplicationServer.Caching.DataCacheException: ErrorCode:SubStatus:There is a temporary failure. Please retry later. (Sufficient secondaries not present or they are in throttled state.) Microsoft.ApplicationServer.Caching.DataCacheException: ErrorCode:SubStatus:There is a temporary failure. Please retry later. (The request did not find the primary.) AppFabric Caching service crashed.{Lease with external store expired: Microsoft.Fabric.Federation.ExternalRingStateStoreException: Lease already expired at Microsoft.Fabric.Data.ExternalStoreAuthority.UpdateNode(NodeInfo nodeInfo, TimeSpan timeout) at Microsoft.Fabric.Federation.SiteNode.PerformExternalRingStateStoreOperations(Boolean& canFormRing, Boolean isInsert, Boolean isJoining)} Could someone please give me some input? This is an HA-enabled cache environment with 3 cache hosts, all running Windows Server 2008 Enterprise Edition, and SQL Server is used for the configuration store.

  • MembershipProvider, IPrincipal, IIdentity?

    - by MRFerocius
    Hello guys; I have a conceptual question... I am making an intranet application (web platform) for a company. I have a SQL Server DB with these tables: Users (userID, userName, userPass, roleID), Roles (roleID, roleName), Pages (pageID, pageURL), RolesXPages (pageID, roleID). What is the best way to build a structure that holds all this information while the user navigates the site? I mean, on each request I should be able to check his role and the pages he can access. I have been reading and there is a lot of material out there, which confuses me: I saw the MembershipProvider, IPrincipal, IIdentity, etc. classes, but I'm not sure which would be best for me. Any thoughts... Thanks in advance! Edit: It gets more confusing every time... I just want to handle those structures at runtime and be able to maintain state during page callbacks or when changing pages...
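    For this shape of schema, one lightweight route is to skip the full MembershipProvider machinery and put a custom IPrincipal on the request once the user is authenticated; role checks (and page-access checks driven by RolesXPages) then hang off it. A minimal sketch (the DB lookups that produce the role names are assumed to exist elsewhere):

    ```csharp
    using System;
    using System.Security.Principal;

    // Wraps one row of Users plus the role names resolved via Roles/RolesXPages.
    public class IntranetPrincipal : IPrincipal
    {
        private readonly string[] _roles;

        public IntranetPrincipal(string userName, string[] roles)
        {
            Identity = new GenericIdentity(userName);
            _roles = roles ?? new string[0];
        }

        public IIdentity Identity { get; private set; }

        public bool IsInRole(string role)
        {
            return Array.Exists(_roles, r => string.Equals(r, role, StringComparison.OrdinalIgnoreCase));
        }
    }
    ```

    Assigning an instance to HttpContext.Current.User in Global.asax (Application_AuthenticateRequest) makes Page.User carry the role data for the rest of the request, so role and page checks survive postbacks and page changes without extra session bookkeeping.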

  • Can't set up image upload in Django

    - by culebrón
    I can't understand what's not working here: 1) settings: MEDIA_ROOT = '/var/www/satel/media' MEDIA_URL = 'http://media.satel.culebron' ADMIN_MEDIA_PREFIX = '/media/' 2) models: class Photo(models.Model): id = models.AutoField(primary_key=True) name = models.CharField(max_length = 200) desc = models.TextField(max_length = 1000) img = models.ImageField(upload_to = 'upload') 3) access rights: drwxr-xrwx 3 culebron culebron 4.0K 2010-04-14 21:13 media drwxr-xrwx 2 culebron culebron 4.0K 2010-04-14 19:04 upload 4) SQL: CREATE TABLE "photos_photo" ( "id" integer NOT NULL PRIMARY KEY, "name" varchar(200) NOT NULL, "desc" text NOT NULL, "img" varchar(100) NOT NULL ); 5) I run the Django test server as myself. 6) result: SuspiciousOperation at /admin/photos/author/add/ Attempted access to '/var/www/satel/upload/OpenStreetMap.png' denied. It's not a PIL & JPEG issue, and it doesn't seem to be an access rights issue. So what's wrong?

  • Convert from Procedural to Object Oriented Code

    - by Anthony
    I have been reading Working Effectively with Legacy Code and Clean Code with the goal of learning strategies on how to begin cleaning up the existing code-base of a large ASP.NET webforms application. This system has been around since 2005 and since then has undergone a number of enhancements. Originally the code was structured as follows (and is still largely structured this way): ASP.NET (aspx/ascx) Code-behind (c#) Business Logic Layer (c#) Data Access Layer (c#) Database (Oracle) The main issue is that the code is procedural masquerading as object-oriented. It virtually violates all of the guidelines described in both books. This is an example of a typical class in the Business Logic Layer: public class AddressBO { public TransferObject GetAddress(string addressID) { if (StringUtils.IsNull(addressID)) { throw new ValidationException("Address ID must be entered"); } AddressDAO addressDAO = new AddressDAO(); return addressDAO.GetAddress(addressID); } public TransferObject Insert(TransferObject addressDetails) { if (StringUtils.IsNull(addressDetails.GetString("EVENT_ID")) || StringUtils.IsNull(addressDetails.GetString("LOCALITY")) || StringUtils.IsNull(addressDetails.GetString("ADDRESS_TARGET")) || StringUtils.IsNull(addressDetails.GetString("ADDRESS_TYPE_CODE")) || StringUtils.IsNull(addressDetails.GetString("CREATED_BY"))) { throw new ValidationException( "You must enter an Event ID, Locality, Address Target, Address Type Code and Created By."); } string addressID = Sequence.GetNextValue("ADDRESS_ID_SEQ"); addressDetails.SetValue("ADDRESS_ID", addressID); string syncID = Sequence.GetNextValue("SYNC_ID_SEQ"); addressDetails.SetValue("SYNC_ADDRESS_ID", syncID); TransferObject syncDetails = new TransferObject(); Transaction transaction = new Transaction(); try { AddressDAO addressDAO = new AddressDAO(); addressDAO.Insert(addressDetails, transaction); // insert the record for the target TransferObject addressTargetDetails = new TransferObject(); switch (addressDetails.GetString("ADDRESS_TARGET")) { case "PARTY_ADDRESSES": { addressTargetDetails.SetValue("ADDRESS_ID", addressID); addressTargetDetails.SetValue("ADDRESS_TYPE_CODE", addressDetails.GetString("ADDRESS_TYPE_CODE")); addressTargetDetails.SetValue("PARTY_ID", addressDetails.GetString("PARTY_ID")); addressTargetDetails.SetValue("EVENT_ID", addressDetails.GetString("EVENT_ID")); addressTargetDetails.SetValue("CREATED_BY", addressDetails.GetString("CREATED_BY")); addressDAO.InsertPartyAddress(addressTargetDetails, transaction); break; } case "PARTY_CONTACT_ADDRESSES": { addressTargetDetails.SetValue("ADDRESS_ID", addressID); addressTargetDetails.SetValue("ADDRESS_TYPE_CODE", addressDetails.GetString("ADDRESS_TYPE_CODE")); addressTargetDetails.SetValue("PUBLIC_RELEASE_FLAG", addressDetails.GetString("PUBLIC_RELEASE_FLAG")); addressTargetDetails.SetValue("CONTACT_ID", addressDetails.GetString("CONTACT_ID")); addressTargetDetails.SetValue("EVENT_ID", addressDetails.GetString("EVENT_ID")); addressTargetDetails.SetValue("CREATED_BY", addressDetails.GetString("CREATED_BY")); addressDAO.InsertContactAddress(addressTargetDetails, transaction); break; } << many more cases here >> default: { break; } } // synchronise SynchronisationBO synchronisationBO = new SynchronisationBO(); syncDetails = synchronisationBO.Synchronise("I", transaction, "ADDRESSES", addressDetails.GetString("ADDRESS_TARGET"), addressDetails, addressTargetDetails); // commit transaction.Commit(); } catch (Exception) { transaction.Rollback(); throw; } return new 
TransferObject("ADDRESS_ID", addressID, "SYNC_DETAILS", syncDetails); } << many more methods are here >> } It has a lot of duplication, the class has a number of responsibilities, etc, etc - it is just generally 'un-clean' code. All of the code throughout the system is dependent on concrete implementations. This is an example of a typical class in the Data Access Layer: public class AddressDAO : GenericDAO { public static readonly string BASE_SQL_ADDRESSES = "SELECT " + " a.address_id, " + " a.event_id, " + " a.flat_unit_type_code, " + " fut.description as flat_unit_description, " + " a.flat_unit_num, " + " a.floor_level_code, " + " fl.description as floor_level_description, " + " a.floor_level_num, " + " a.building_name, " + " a.lot_number, " + " a.street_number, " + " a.street_name, " + " a.street_type_code, " + " st.description as street_type_description, " + " a.street_suffix_code, " + " ss.description as street_suffix_description, " + " a.postal_delivery_type_code, " + " pdt.description as postal_delivery_description, " + " a.postal_delivery_num, " + " a.locality, " + " a.state_code, " + " s.description as state_description, " + " a.postcode, " + " a.country, " + " a.lock_num, " + " a.created_by, " + " TO_CHAR(a.created_datetime, '" + SQL_DATETIME_FORMAT + "') as created_datetime, " + " a.last_updated_by, " + " TO_CHAR(a.last_updated_datetime, '" + SQL_DATETIME_FORMAT + "') as last_updated_datetime, " + " a.sync_address_id, " + " a.lat," + " a.lon, " + " a.validation_confidence, " + " a.validation_quality, " + " a.validation_status " + "FROM ADDRESSES a, FLAT_UNIT_TYPES fut, FLOOR_LEVELS fl, STREET_TYPES st, " + " STREET_SUFFIXES ss, POSTAL_DELIVERY_TYPES pdt, STATES s " + "WHERE a.flat_unit_type_code = fut.flat_unit_type_code(+) " + "AND a.floor_level_code = fl.floor_level_code(+) " + "AND a.street_type_code = st.street_type_code(+) " + "AND a.street_suffix_code = ss.street_suffix_code(+) " + "AND a.postal_delivery_type_code = pdt.postal_delivery_type_code(+) " + "AND a.state_code = s.state_code(+) "; public TransferObject GetAddress(string addressID) { //Build the SELECT Statement StringBuilder selectStatement = new StringBuilder(BASE_SQL_ADDRESSES); //Add WHERE condition selectStatement.Append(" AND a.address_id = :addressID"); ArrayList parameters = new ArrayList{DBUtils.CreateOracleParameter("addressID", OracleDbType.Decimal, addressID)}; // Execute the SELECT statement Query query = new Query(); DataSet results = query.Execute(selectStatement.ToString(), parameters); // Check if 0 or more than one rows returned if (results.Tables[0].Rows.Count == 0) { throw new NoDataFoundException(); } if (results.Tables[0].Rows.Count > 1) { throw new TooManyRowsException(); } // Return a TransferObject containing the values return new TransferObject(results); } public void Insert(TransferObject insertValues, Transaction transaction) { // Store Values string addressID = insertValues.GetString("ADDRESS_ID"); string syncAddressID = insertValues.GetString("SYNC_ADDRESS_ID"); string eventID = insertValues.GetString("EVENT_ID"); string createdBy = insertValues.GetString("CREATED_BY"); // postal delivery string postalDeliveryTypeCode = insertValues.GetString("POSTAL_DELIVERY_TYPE_CODE"); string postalDeliveryNum = insertValues.GetString("POSTAL_DELIVERY_NUM"); // unit/building string flatUnitTypeCode = insertValues.GetString("FLAT_UNIT_TYPE_CODE"); string flatUnitNum = insertValues.GetString("FLAT_UNIT_NUM"); string floorLevelCode = insertValues.GetString("FLOOR_LEVEL_CODE"); string floorLevelNum = 
insertValues.GetString("FLOOR_LEVEL_NUM"); string buildingName = insertValues.GetString("BUILDING_NAME"); // street string lotNumber = insertValues.GetString("LOT_NUMBER"); string streetNumber = insertValues.GetString("STREET_NUMBER"); string streetName = insertValues.GetString("STREET_NAME"); string streetTypeCode = insertValues.GetString("STREET_TYPE_CODE"); string streetSuffixCode = insertValues.GetString("STREET_SUFFIX_CODE"); // locality/state/postcode/country string locality = insertValues.GetString("LOCALITY"); string stateCode = insertValues.GetString("STATE_CODE"); string postcode = insertValues.GetString("POSTCODE"); string country = insertValues.GetString("COUNTRY"); // esms address string esmsAddress = insertValues.GetString("ESMS_ADDRESS"); //address/GPS string lat = insertValues.GetString("LAT"); string lon = insertValues.GetString("LON"); string zoom = insertValues.GetString("ZOOM"); //string validateDate = insertValues.GetString("VALIDATED_DATE"); string validatedBy = insertValues.GetString("VALIDATED_BY"); string confidence = insertValues.GetString("VALIDATION_CONFIDENCE"); string status = insertValues.GetString("VALIDATION_STATUS"); string quality = insertValues.GetString("VALIDATION_QUALITY"); // the insert statement StringBuilder insertStatement = new StringBuilder("INSERT INTO ADDRESSES ("); StringBuilder valuesStatement = new StringBuilder("VALUES ("); ArrayList parameters = new ArrayList(); // build the insert statement insertStatement.Append("ADDRESS_ID, EVENT_ID, CREATED_BY, CREATED_DATETIME, LOCK_NUM "); valuesStatement.Append(":addressID, :eventID, :createdBy, SYSDATE, 1 "); parameters.Add(DBUtils.CreateOracleParameter("addressID", OracleDbType.Decimal, addressID)); parameters.Add(DBUtils.CreateOracleParameter("eventID", OracleDbType.Decimal, eventID)); parameters.Add(DBUtils.CreateOracleParameter("createdBy", OracleDbType.Varchar2, createdBy)); // build the insert statement if (!StringUtils.IsNull(syncAddressID)) { insertStatement.Append(", SYNC_ADDRESS_ID"); valuesStatement.Append(", :syncAddressID"); parameters.Add(DBUtils.CreateOracleParameter("syncAddressID", OracleDbType.Decimal, syncAddressID)); } if (!StringUtils.IsNull(postalDeliveryTypeCode)) { insertStatement.Append(", POSTAL_DELIVERY_TYPE_CODE"); valuesStatement.Append(", :postalDeliveryTypeCode "); parameters.Add(DBUtils.CreateOracleParameter("postalDeliveryTypeCode", OracleDbType.Varchar2, postalDeliveryTypeCode)); } if (!StringUtils.IsNull(postalDeliveryNum)) { insertStatement.Append(", POSTAL_DELIVERY_NUM"); valuesStatement.Append(", :postalDeliveryNum "); parameters.Add(DBUtils.CreateOracleParameter("postalDeliveryNum", OracleDbType.Varchar2, postalDeliveryNum)); } if (!StringUtils.IsNull(flatUnitTypeCode)) { insertStatement.Append(", FLAT_UNIT_TYPE_CODE"); valuesStatement.Append(", :flatUnitTypeCode "); parameters.Add(DBUtils.CreateOracleParameter("flatUnitTypeCode", OracleDbType.Varchar2, flatUnitTypeCode)); } if (!StringUtils.IsNull(lat)) { insertStatement.Append(", LAT"); valuesStatement.Append(", :lat "); parameters.Add(DBUtils.CreateOracleParameter("lat", OracleDbType.Decimal, lat)); } if (!StringUtils.IsNull(lon)) { insertStatement.Append(", LON"); valuesStatement.Append(", :lon "); parameters.Add(DBUtils.CreateOracleParameter("lon", OracleDbType.Decimal, lon)); } if (!StringUtils.IsNull(zoom)) { insertStatement.Append(", ZOOM"); valuesStatement.Append(", :zoom "); parameters.Add(DBUtils.CreateOracleParameter("zoom", OracleDbType.Decimal, zoom)); } if (!StringUtils.IsNull(flatUnitNum)) { 
insertStatement.Append(", FLAT_UNIT_NUM"); valuesStatement.Append(", :flatUnitNum "); parameters.Add(DBUtils.CreateOracleParameter("flatUnitNum", OracleDbType.Varchar2, flatUnitNum)); } if (!StringUtils.IsNull(floorLevelCode)) { insertStatement.Append(", FLOOR_LEVEL_CODE"); valuesStatement.Append(", :floorLevelCode "); parameters.Add(DBUtils.CreateOracleParameter("floorLevelCode", OracleDbType.Varchar2, floorLevelCode)); } if (!StringUtils.IsNull(floorLevelNum)) { insertStatement.Append(", FLOOR_LEVEL_NUM"); valuesStatement.Append(", :floorLevelNum "); parameters.Add(DBUtils.CreateOracleParameter("floorLevelNum", OracleDbType.Varchar2, floorLevelNum)); } if (!StringUtils.IsNull(buildingName)) { insertStatement.Append(", BUILDING_NAME"); valuesStatement.Append(", :buildingName "); parameters.Add(DBUtils.CreateOracleParameter("buildingName", OracleDbType.Varchar2, buildingName)); } if (!StringUtils.IsNull(lotNumber)) { insertStatement.Append(", LOT_NUMBER"); valuesStatement.Append(", :lotNumber "); parameters.Add(DBUtils.CreateOracleParameter("lotNumber", OracleDbType.Varchar2, lotNumber)); } if (!StringUtils.IsNull(streetNumber)) { insertStatement.Append(", STREET_NUMBER"); valuesStatement.Append(", :streetNumber "); parameters.Add(DBUtils.CreateOracleParameter("streetNumber", OracleDbType.Varchar2, streetNumber)); } if (!StringUtils.IsNull(streetName)) { insertStatement.Append(", STREET_NAME"); valuesStatement.Append(", :streetName "); parameters.Add(DBUtils.CreateOracleParameter("streetName", OracleDbType.Varchar2, streetName)); } if (!StringUtils.IsNull(streetTypeCode)) { insertStatement.Append(", STREET_TYPE_CODE"); valuesStatement.Append(", :streetTypeCode "); parameters.Add(DBUtils.CreateOracleParameter("streetTypeCode", OracleDbType.Varchar2, streetTypeCode)); } if (!StringUtils.IsNull(streetSuffixCode)) { insertStatement.Append(", STREET_SUFFIX_CODE"); valuesStatement.Append(", :streetSuffixCode "); parameters.Add(DBUtils.CreateOracleParameter("streetSuffixCode", OracleDbType.Varchar2, streetSuffixCode)); } if (!StringUtils.IsNull(locality)) { insertStatement.Append(", LOCALITY"); valuesStatement.Append(", :locality"); parameters.Add(DBUtils.CreateOracleParameter("locality", OracleDbType.Varchar2, locality)); } if (!StringUtils.IsNull(stateCode)) { insertStatement.Append(", STATE_CODE"); valuesStatement.Append(", :stateCode"); parameters.Add(DBUtils.CreateOracleParameter("stateCode", OracleDbType.Varchar2, stateCode)); } if (!StringUtils.IsNull(postcode)) { insertStatement.Append(", POSTCODE"); valuesStatement.Append(", :postcode "); parameters.Add(DBUtils.CreateOracleParameter("postcode", OracleDbType.Varchar2, postcode)); } if (!StringUtils.IsNull(country)) { insertStatement.Append(", COUNTRY"); valuesStatement.Append(", :country "); parameters.Add(DBUtils.CreateOracleParameter("country", OracleDbType.Varchar2, country)); } if (!StringUtils.IsNull(esmsAddress)) { insertStatement.Append(", ESMS_ADDRESS"); valuesStatement.Append(", :esmsAddress "); parameters.Add(DBUtils.CreateOracleParameter("esmsAddress", OracleDbType.Varchar2, esmsAddress)); } if (!StringUtils.IsNull(validatedBy)) { insertStatement.Append(", VALIDATED_DATE"); valuesStatement.Append(", SYSDATE "); insertStatement.Append(", VALIDATED_BY"); valuesStatement.Append(", :validatedBy "); parameters.Add(DBUtils.CreateOracleParameter("validatedBy", OracleDbType.Varchar2, validatedBy)); } if (!StringUtils.IsNull(confidence)) { insertStatement.Append(", VALIDATION_CONFIDENCE"); valuesStatement.Append(", :confidence "); 
parameters.Add(DBUtils.CreateOracleParameter("confidence", OracleDbType.Decimal, confidence)); } if (!StringUtils.IsNull(status)) { insertStatement.Append(", VALIDATION_STATUS"); valuesStatement.Append(", :status "); parameters.Add(DBUtils.CreateOracleParameter("status", OracleDbType.Varchar2, status)); } if (!StringUtils.IsNull(quality)) { insertStatement.Append(", VALIDATION_QUALITY"); valuesStatement.Append(", :quality "); parameters.Add(DBUtils.CreateOracleParameter("quality", OracleDbType.Decimal, quality)); } // finish off the statement insertStatement.Append(") "); valuesStatement.Append(")"); // build the insert statement string sql = insertStatement + valuesStatement.ToString(); // Execute the INSERT Statement Dml dmlDAO = new Dml(); int rowsAffected = dmlDAO.Execute(sql, transaction, parameters); if (rowsAffected == 0) { throw new NoRowsAffectedException(); } } << many more methods go here >> } This system was developed by me and a small team back in 2005 after a 1 week .NET course. Before than my experience was in client-server applications. Over the past 5 years I've come to recognise the benefits of automated unit testing, automated integration testing and automated acceptance testing (using Selenium or equivalent) but the current code-base seems impossible to introduce these concepts. We are now starting to work on a major enhancement project with tight time-frames. The team consists of 5 .NET developers - 2 developers with a few years of .NET experience and 3 others with little or no .NET experience. None of the team (including myself) has experience in using .NET unit testing or mocking frameworks. What strategy would you use to make this code cleaner, more object-oriented, testable and maintainable?
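    Whichever overall strategy you choose, the enabling move from Working Effectively with Legacy Code is to break seams like the hard-wired new AddressDAO() so tests can substitute a fake before you change any behavior. A sketch of that first incision against the code above (the interface name and constructor chaining are illustrative):

    ```csharp
    // 1. Extract an interface that mirrors what AddressBO actually calls.
    public interface IAddressDAO
    {
        TransferObject GetAddress(string addressID);
        void Insert(TransferObject insertValues, Transaction transaction);
    }

    // 2. Add the interface to the existing class declaration (no behavior change):
    //    public class AddressDAO : GenericDAO, IAddressDAO { ... }

    // 3. Open a constructor seam; existing callers keep using the default.
    public class AddressBO
    {
        private readonly IAddressDAO _dao;

        public AddressBO() : this(new AddressDAO()) { }
        public AddressBO(IAddressDAO dao) { _dao = dao; }

        public TransferObject GetAddress(string addressID)
        {
            if (StringUtils.IsNull(addressID))
            {
                throw new ValidationException("Address ID must be entered");
            }
            return _dao.GetAddress(addressID);
        }
    }
    ```

    With that seam in place you can pin current behavior with characterization tests (a hand-rolled fake IAddressDAO is enough to start; a mocking framework can come later), and only then start carving the switch statement and the duplicated validation into their own objects, one covered slice at a time.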

  • Doing a large number of upserts as fast as possible

    - by Jason Swett
    My app (which uses MySQL) is doing a large number of subsequent upserts. Right now my SQL looks like this: INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('VICTOR H KINDELL','123','123','123') INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('VICTOR H KINDELL','123','123','123') INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('VICTOR H KINDELL OR','123','123','123') INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('TRACY L WALTER PERSONAL REP FOR','123','123','123') INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('TRACY L WALTER PERSONAL REP FOR','123','123','123') So far I've found INSERT IGNORE to be the fastest way to achieve upserts. Selecting a record to see if it exists and then either updating it or inserting a new one is too slow. Even this is not as fast as I'd like because I need to do a separate statement for each record. Sometimes I'll have around 50,000 of these statements in a row. Is there a way to take care of all of these in just one statement, without deleting any existing records?
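    One option that keeps INSERT IGNORE but cuts the statement count is MySQL's multi-row VALUES syntax: batch a few hundred to a few thousand rows per statement (bounded by max_allowed_packet), and 50,000 upserts become a few dozen round trips. A sketch using the rows above:

    ```sql
    INSERT IGNORE INTO customer
      (name, customer_number, social_security_number, phone)
    VALUES
      ('VICTOR H KINDELL',                '123', '123', '123'),
      ('VICTOR H KINDELL OR',             '123', '123', '123'),
      ('TRACY L WALTER PERSONAL REP FOR', '123', '123', '123');
    ```

    If some columns genuinely need updating when a row already exists, INSERT ... ON DUPLICATE KEY UPDATE accepts the same multi-row form, making it a true upsert rather than an insert-or-skip.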

  • Improved Performance on PeopleSoft Combined Benchmark using SPARC T4-4

    - by Brian
    Oracle's SPARC T4-4 server running Oracle's PeopleSoft HCM 9.1 combined online and batch benchmark achieved a world record: 18,000 concurrent users experiencing sub-second response time while a PeopleSoft Payroll batch job of 500,000 employees executed in 32.4 minutes. This result was obtained with a SPARC T4-4 server running Oracle Database 11g Release 2, a SPARC T4-4 server running the PeopleSoft HCM 9.1 application server, and a SPARC T4-2 server running Oracle WebLogic Server in the web tier. The SPARC T4-4 server running the application tier used Oracle Solaris Zones, which provide a flexible, scalable and manageable virtualization environment. The average CPU utilization was 17% on the SPARC T4-2 web tier server, 59% on the SPARC T4-4 application tier server, and 47% on the SPARC T4-4 database tier server (online and batch), leaving significant headroom for additional processing across the three tiers. The SPARC T4-4 server used for the database tier hosted Oracle Database 11g Release 2 using Oracle Automatic Storage Management (ASM) for database file management, with I/O performance equivalent to raw devices.

    Performance Landscape

    Results are presented for the PeopleSoft HRMS Self-Service and Payroll combined benchmark. The new result with 128 streams shows significant improvement in the payroll batch processing time with little impact on the self-service component response time.

    PeopleSoft HRMS Self-Service and Payroll Benchmark

    | Systems | Users | Ave Response, Search (sec) | Ave Response, Save (sec) | Batch Time (min) | Streams |
    |---|---|---|---|---|---|
    | SPARC T4-2 (web), SPARC T4-4 (app), SPARC T4-4 (db) | 18,000 | 0.988 | 0.539 | 32.4 | 128 |
    | SPARC T4-2 (web), SPARC T4-4 (app), SPARC T4-4 (db) | 18,000 | 0.944 | 0.503 | 43.3 | 64 |

    The following results are for the PeopleSoft HRMS Self-Service benchmark that was previously run. They are not directly comparable with the combined results because they do not include the payroll component.

    PeopleSoft HRMS Self-Service 9.1 Benchmark

    | Systems | Users | Ave Response, Search (sec) | Ave Response, Save (sec) | Batch Time (min) | Streams |
    |---|---|---|---|---|---|
    | SPARC T4-2 (web), SPARC T4-4 (app), 2x SPARC T4-2 (db) | 18,000 | 1.048 | 0.742 | N/A | N/A |

    The following results are for the PeopleSoft Payroll benchmark that was previously run. They are not directly comparable with the combined results because they do not include the self-service component.

    PeopleSoft Payroll (N.A.) 9.1 - 500K Employees (7 Million SQL PayCalc, Unicode)

    | Systems | Users | Ave Response, Search (sec) | Ave Response, Save (sec) | Batch Time (min) | Streams |
    |---|---|---|---|---|---|
    | SPARC T4-4 (db) | N/A | N/A | N/A | 30.84 | 96 |

    Configuration Summary

    Application tier:
    - 1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz, 512 GB memory
    - Oracle Solaris 11 11/11
    - PeopleTools 8.52
    - PeopleSoft HCM 9.1
    - Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031
    - Java Platform, Standard Edition Development Kit 6 Update 32

    Database tier:
    - 1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz, 256 GB memory
    - Oracle Solaris 11 11/11
    - Oracle Database 11g Release 2
    - PeopleTools 8.52
    - Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031
    - Micro Focus Server Express (COBOL v 5.1.00)

    Web tier:
    - 1 x SPARC T4-2 server with 2 x SPARC T4 processors, 2.85 GHz, 256 GB memory
    - Oracle Solaris 11 11/11
    - PeopleTools 8.52
    - Oracle WebLogic Server 10.3.4
    - Java Platform, Standard Edition Development Kit 6 Update 32

    Storage:
    - 1 x Sun Server X2-4 as a COMSTAR head for data (4 x Intel Xeon X7550, 2.0 GHz; 128 GB memory)
    - 1 x Sun Storage F5100 Flash Array (80 flash modules)
    - 1 x Sun Storage F5100 Flash Array (40 flash modules)
    - 1 x Sun Fire X4275 as a COMSTAR head for redo logs (12 x 2 TB SAS disks with Niwot RAID controller)

    Benchmark Description

    This benchmark combines the PeopleSoft HCM 9.1 HR Self-Service online workload and the PeopleSoft Payroll batch workload, run against a unified database deployed on Oracle Database 11g Release 2. The PeopleSoft HRSS benchmark kit is an Oracle standard benchmark kit run by all platform vendors to measure performance. It is an OLTP benchmark whose database SQL is moderately complex. The results are certified by Oracle and a white paper is published.

    PeopleSoft HR SS defines a business transaction as a series of HTML pages that guide a user through a particular scenario. Users are defined as corporate employees, managers and HR administrators. The benchmark consists of 14 scenarios which emulate users performing typical HCM transactions, such as viewing a paycheck, promoting and hiring employees, and updating an employee profile. All these transactions are well defined in the PeopleSoft HR Self-Service 9.1 benchmark kit. The benchmark metric is the weighted average response search/save time across all transactions.

    The PeopleSoft 9.1 Payroll (North America) benchmark demonstrates system performance for a range of processing volumes in a specific configuration. This workload represents the large batch runs typical of an ERP environment during a mass update. The benchmark measures run times for five application business processes against a database representing a large organization: Paysheet Creation, Payroll Calculation, Payroll Confirmation, Print Advice Forms, and Create Direct Deposit File. The benchmark metric is the cumulative elapsed time taken to complete the Paysheet Creation, Payroll Calculation and Payroll Confirmation business application processes.

    The metrics for each benchmark are taken while both run simultaneously against the same database back-end. Specifically, the payroll batch processes are started when the online workload reaches steady state (the maximum number of online users) and overlap with online transactions for the duration of the steady state.

    Key Points and Best Practices

    Two PeopleSoft domain sets with 200 application servers each were hosted in 2 separate Oracle Solaris Zones on a SPARC T4-4 server to demonstrate consolidation of multiple application servers, ease of administration and performance tuning. Each Oracle Solaris Zone was bound to a separate processor set, each containing 15 cores (120 threads in total). The default set (1 core from the first and third processor sockets, 16 threads in total) was used for network and disk interrupt handling. This was done to improve performance: it reduces memory access latency by using the physical memory closest to the processors, and it offloads I/O interrupt handling to the default-set threads, freeing up CPU resources for application server threads and balancing the application workload across 240 threads. A total of 128 PeopleSoft streams server processes were used on the database node to complete the payroll batch job of 500,000 employees in 32.4 minutes.

    See Also

    - Oracle PeopleSoft Benchmark White Papers (oracle.com)
    - SPARC T4-2 Server (oracle.com, OTN)
    - SPARC T4-4 Server (oracle.com, OTN)
    - PeopleSoft Enterprise Human Capital Management (oracle.com, OTN)
    - PeopleSoft Enterprise Human Capital Management (Payroll) (oracle.com, OTN)
    - Oracle Solaris (oracle.com, OTN)
    - Oracle Database 11g Release 2 (oracle.com, OTN)

    Disclosure Statement

    Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 8 November 2012.

  • Python and Postgresql

    - by Ian
    Hi all, if you wanted to manipulate the data in a table in a PostgreSQL database using some Python (maybe running a little analysis on the result set using SciPy) and then wanted to export that data back into another table in the same database, how would you go about the implementation? Is the only/best way to do this simply to run the query, have Python store the results in an array, manipulate the array in Python, and then run another SQL statement to output to the database? I'm really just asking: is there a more efficient way to deal with the data? Thanks, Ian

  • A PHP design pattern for the model part [PHP Zend Framework]

    - by Matthieu
    I have a PHP MVC application using Zend Framework. As presented in the quickstart, I use 3 layers for the model part: Model (business logic), Data Mapper, and Table Data Gateway (or Data Access Object, i.e. one class per SQL table). The model is UML-designed and totally independent of the DB. My problem is: I can't have multiple instances of the same "instance/record". For example: if I load the user "Chuck Norris" with id=5, this will create a new model instance whose members will be filled by the data mapper (the data mapper queries the table data gateway, which queries the DB). Then, if I change the name to "Duck Norras", don't save it to the DB right away, and re-load the same user into another variable, I have "synchronisation" problems... (different instances for the same "record"). Right now, I use the Multiton pattern: like Singleton, but with multiple instances indexed by a key (which is the user ID in our example). But this is complicating my development a lot, and my testing too. How do I do it right?

  • Using Node.js as an accelerator for WCF REST services

    - by Elton Stoneman
    Node.js is a server-side JavaScript platform "for easily building fast, scalable network applications". It's built on Google's V8 JavaScript engine and uses an (almost) entirely async event-driven processing model, running in a single thread. If you're new to Node and your reaction is "why would I want to run JavaScript on the server side?", this is the headline answer: in 150 lines of JavaScript you can build a Node.js app which works as an accelerator for WCF REST services*. It can double your messages-per-second throughput, halve your CPU workload and use one-fifth of the memory footprint, compared to the WCF services direct.

    Well, it can if: 1) your WCF services are first-class HTTP citizens, honouring client cache ETag headers in request and response; 2) your services do a reasonable amount of work to build a response; 3) your data is read more often than it's written.

    In one of my projects I have a set of REST services in WCF which deal with data that only gets updated weekly, but which can be read hundreds of times an hour. The services issue ETags and will return a 304 if the client sends a request with the current ETag, which means in the most common scenario the client uses its local cached copy. But when the weekly update happens, all the client caches are invalidated and they all need the same new data. Then the service gets hundreds of requests with old ETags, and they go through the full service stack to build the same response for each, taking up threads and processing time. Part of that processing means going off to a database on a separate cloud, which introduces more latency and downtime potential.

    We can use ASP.NET output caching with WCF to solve the repeated processing problem, but the server will still be thread-bound on incoming requests, and getting the current ETag reliably needs a database call per request. The accelerator solves that by running as a proxy - all client calls come into the proxy, and the proxy routes calls to the underlying REST service. We could use Node as a straight passthrough proxy and expect some benefit, as the server would be less thread-bound, but we would still have one WCF call and one database call per proxy call. But add some smart caching logic to the proxy, and share ETags between Node and WCF (so the proxy doesn't even need to call the service to get the current ETag), and the underlying service will only be invoked when data has changed, and then only once - all subsequent client requests will be served from the proxy cache.

    I've built this as a sample up on GitHub: NodeWcfAccelerator on sixeyed.codegallery. Here's how the architecture looks (diagram in the original post):

    The code is very simple. The Node proxy runs on port 8010 and all client requests target the proxy. If the client request has an ETag header, the proxy looks up the ETag in the tag cache to see if it is current - the sample uses memcached to share ETags between .NET and Node. If the ETag from the client matches the current server tag, the proxy sends a 304 response with an empty body to the client, telling it to use its own cached version of the data. If the ETag from the client is stale, the proxy looks for a local cached version of the response, checking for a file named after the current ETag. If that file exists, its contents are returned to the client as the body in a 200 response, which includes the current ETag in the header. If the proxy does not have a local cached file for the service response, it calls the service, and writes the WCF response to the local cache file and to the body of a 200 response for the client. So the WCF service is only troubled if both client and proxy have stale (or no) caches.

    The only (vaguely) clever bit in the sample is using the ETag cache, so the proxy can serve cached requests without any communication with the underlying service, which it does completely generically, so the proxy has no notion of what it is serving or what the services it proxies are doing. The relative path from the URL is used as the lookup key, so there's no shared key-generation logic between .NET and Node, and when WCF stores a tag it also stores the "read" URL against the ETag so it can be used for a reverse lookup, e.g.:

    | Key | Value |
    |---|---|
    | /WcfSampleService/PersonService.svc/rest/fetch/3 | "28cd4796-76b8-451b-adfd-75cb50a50fa6" |
    | "28cd4796-76b8-451b-adfd-75cb50a50fa6" | /WcfSampleService/PersonService.svc/rest/fetch/3 |

    In Node we read the cache using the incoming URL path as the key, and we know that "28cd4796-76b8-451b-adfd-75cb50a50fa6" is the current ETag; we look for a local cached response in /caches/28cd4796-76b8-451b-adfd-75cb50a50fa6.body (and the corresponding .header file, which contains the original service response headers, so the proxy response is exactly the same as the underlying service's). When the data is updated, we need to invalidate the ETag cache - which is why we need the reverse lookup. In the WCF update service, we don't need to know the URL of the related read service - we fetch the entity from the database, do a reverse lookup on the tag cache using the old ETag to get the read URL, update the new ETag against the URL, store the new reverse lookup and delete the old one.

    Running Apache Bench against the two endpoints gives the headline performance comparison. Making 1000 requests with concurrency of 100, and not sending any ETag headers in the requests, with the Node proxy I get 102 requests handled per second and an average response time of 975 milliseconds, with 90% of responses served within 850 milliseconds; going direct to WCF with the same parameters, I get 53 requests handled per second and a mean response time of 1853 milliseconds, with 90% of responses served within 3260 milliseconds. Informally monitoring server usage during the tests, Node maxed at 20% CPU and 20 MB memory; IIS maxed at 60% CPU and 100 MB memory.

    Note that the sample WCF service does a database read and sleeps for 250 milliseconds to simulate a moderate processing load, so this is *not* a baseline Node-vs-WCF comparison, but for similar scenarios where the service call is expensive but applicable to numerous clients for a long timespan, the performance boost from the accelerator is considerable.

    * - actually, the accelerator will work nicely for any HTTP request where the URL (path + querystring) uniquely identifies a resource. In the sample, there is an assumption that the ETag is a GUID wrapped in double-quotes (e.g. "28cd4796-76b8-451b-adfd-75cb50a50fa6") - which is the default for WCF services. I use that assumption to name the cache files uniquely, but it is a trivial change to adapt to other ETag formats.
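    For readers who haven't met WCF's conditional-GET helpers, the service side of the handshake described above looks roughly like this (a sketch, not the code from the GitHub sample; the tag lookup is a stand-in for the shared memcached store):

    ```csharp
    using System;
    using System.ServiceModel.Web;

    public class Person { public string Name { get; set; } }  // stand-in entity

    public class PersonReadService
    {
        public Person Fetch(string id)
        {
            Guid etag = GetCurrentETag(id);  // hypothetical: read from memcached

            // Replies 304 Not Modified immediately when the client's
            // If-None-Match header carries this same tag.
            WebOperationContext.Current.IncomingRequest.CheckConditionalRetrieve(etag);

            Person person = LoadFromDatabase(id);  // hypothetical data access
            WebOperationContext.Current.OutgoingResponse.SetETag(etag);
            return person;
        }

        private static Guid GetCurrentETag(string id) { return Guid.NewGuid(); }
        private static Person LoadFromDatabase(string id) { return new Person { Name = "..." }; }
    }
    ```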

  • ruby on rails named scope implementation

    - by Engwan
    From the book Agile Web Development with Rails: class Order < ActiveRecord::Base named_scope :last_n_days, lambda { |days| {:conditions => ['updated < ?' , days] } } named_scope :checks, :conditions => {:pay_type => :check} end The statement orders = Orders.checks.last_n_days(7) will result in only one query to the database. How does Rails implement this? I'm new to Ruby and I'm wondering if there's a special construct that allows this to happen. To be able to chain methods like that, the functions generated by named_scope must return themselves or an object that can be scoped further. But how does Ruby know that it is the last function call and that it should query the database now? I ask because the statement above actually queries the database, rather than just returning the SQL that results from chaining.
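    A cross-language analogy may help (an analogy only, not Rails internals): this is the same deferred-execution trick LINQ's IQueryable plays in C#, where each call merges conditions into a query object and nothing hits the database until something enumerates the result. Rails 2.x does the equivalent with a scope proxy whose method_missing forwards enumeration (each, to_a, assignment to a variable that is iterated) to a single merged find.

    ```csharp
    using System;
    using System.Linq;

    public class Order
    {
        public string PayType { get; set; }
        public DateTime Updated { get; set; }
    }

    public static class ScopeAnalogy
    {
        // 'orders' stands in for a LINQ provider's table (e.g. an EF/LINQ-to-SQL set).
        public static void Run(IQueryable<Order> orders, DateTime cutoff)
        {
            IQueryable<Order> scoped = orders
                .Where(o => o.PayType == "check")   // like Orders.checks
                .Where(o => o.Updated < cutoff);    // like .last_n_days(7)

            // No SQL has run yet; enumerating fires ONE combined query.
            var results = scoped.ToList();
        }
    }
    ```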

  • [C#] How to convert a string encoded in windows-1250 to Unicode?

    - by Deveti Putnik
    Hi! I am receiving strings in the Windows-1250 codepage from a dll (a wrapper for an external data source), and I would like to insert them correctly (as Unicode) into a table in a SQL Server database. Since the particular column in the database that should hold the data is of type NVarchar, I only need to convert the string in my C# code to Unicode and pass it as a parameter. Everything is well and nice, but I stumbled on the conversion step. I tried the following, but it doesn't work: private static String getUnicodeValue(string string2Encode) { Encoding srcEncoding = Encoding.GetEncoding("Windows-1250"); UnicodeEncoding dstEncoding = new UnicodeEncoding(); byte[] srcBytes = srcEncoding.GetBytes(string2Encode); byte[] dstBytes = dstEncoding.GetBytes(string2Encode); return dstEncoding.GetString(dstBytes); } When I insert the returned string into the table, I don't get the correct letters like Đ, đ, Č, č, Ć or ć. Please, help! :)
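    The snag is that a .NET string is already Unicode (UTF-16) internally, so encoding it to Windows-1250 and back, as above, is a round trip that cannot repair anything. The fix depends on what the dll actually hands over; two common cases, sketched below (the Latin-1 guess in the second case is an assumption that must match whichever wrong codepage the wrapper used):

    ```csharp
    using System.Text;

    public static class Cp1250Fix
    {
        // Case 1: the dll can expose the raw bytes - decode them once, correctly.
        public static string FromRawBytes(byte[] windows1250Bytes)
        {
            return Encoding.GetEncoding(1250).GetString(windows1250Bytes);
        }

        // Case 2: the dll already built a string, but decoded the bytes with the
        // wrong codepage (assumed here: Latin-1 / ISO-8859-1). Re-encode with that
        // same wrong codepage to recover the bytes, then decode as Windows-1250.
        public static string FromMisdecodedString(string garbled)
        {
            byte[] originalBytes = Encoding.GetEncoding("ISO-8859-1").GetBytes(garbled);
            return Encoding.GetEncoding(1250).GetString(originalBytes);
        }
    }
    ```

    Either way, the corrected string can go straight into a SqlParameter bound to the NVarchar column; no further conversion is needed on the database side.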

  • how to set charset for MySQL in RODBC

    - by lokheart
    I have data with Chinese characters as field names and values. I imported it from XLS into Access 2007 and exported it via ODBC. Then I use RODBC to read it in R: the field names are OK, but in the data all of the Chinese characters are shown as ?. I have read the RODBC manual, and it says: "If it is possible to set the DBMS or ODBC driver to communicate in the character set of the R session then this should be done. For example, MySQL can set the communication character set via SQL, e.g. SET NAMES 'utf8'." I guess this is the problem, but how can I send this command to MySQL via RODBC? Thanks!

  • Best practices for combining Lucene.NET and a relational database?

    - by FlySwat
    I'm working on a project where I will have a LOT of data, and it will be searchable by several forms that are very efficiently expressed as SQL queries, but it also needs to be searched via natural language processing. My plan is to build an index using Lucene for that form of search. My question is: if I do this and perform a search, Lucene will return the IDs of the matching documents in the index, and I then have to look those entities up in the relational database. This could be done in two ways (that I can think of so far): N separate queries (horrible); or passing all the IDs to a stored procedure at once (perhaps as a comma-delimited parameter), which has the downsides of being limited by the maximum parameter size and of the slow performance of a UDF splitting the string into a temporary table. I'm almost tempted to mirror everything into Lucene's index, so that I can periodically generate the index from the backing store but only need to access it for the front end. Advice?
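    There is a middle path between those two options: chunk the Lucene hits into modest parameterized IN lists (a few hundred per round trip), which avoids both the per-ID chatter and the UDF string-split. A sketch (table and column names are made up):

    ```csharp
    using System.Collections.Generic;
    using System.Data.SqlClient;
    using System.Linq;

    public static class LuceneEntityLoader
    {
        public static void LoadMatches(SqlConnection conn, IList<int> luceneIds)
        {
            const int chunkSize = 500;  // stays well under parameter limits

            for (int offset = 0; offset < luceneIds.Count; offset += chunkSize)
            {
                List<int> chunk = luceneIds.Skip(offset).Take(chunkSize).ToList();
                string[] names = chunk.Select((_, i) => "@p" + i).ToArray();

                using (SqlCommand cmd = conn.CreateCommand())
                {
                    cmd.CommandText =
                        "SELECT Id, Title, Body FROM Documents WHERE Id IN (" +
                        string.Join(",", names) + ")";
                    for (int i = 0; i < chunk.Count; i++)
                        cmd.Parameters.AddWithValue(names[i], chunk[i]);

                    using (SqlDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // hydrate the matching entity here
                        }
                    }
                }
            }
        }
    }
    ```

    On SQL Server 2008 and later, a table-valued parameter is the cleaner version of the same idea and removes the parameter-count ceiling entirely.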

  • Error in retrieving data from Excel File

    - by Sreejesh Kumar
    I have an Excel file and wanted to pull its data into a SQL Server table; the data transferred successfully. Then, in the Excel file, I removed a lengthy text value from one row of the column named "Risk". Now the package execution fails at the source, i.e. the Excel file. The errors are shown as "[Audit [1]] Error: There was an error with output column "Risk" (100) on output "Excel Source Output" (9). The column status returned was: "DBSTATUS_UNAVAILABLE"." and "[Audit [1]] Error: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "output column "Risk" (100)" failed because error code 0xC0209071 occurred, and the error row disposition on "output column "Risk" (100)" specifies failure on error. An error occurred on the specified object of the specified component. There may be error messages posted before this with more information about the failure." The error occurs only when I remove this particular text from this row.

  • Tutorials for .NET database app

    - by ChrisC
    My earlier question and comments are at "What is ADO.NET". This shows my level of knowledge about C# database apps. Can someone point me to tutorials and/or primers that can help me proceed from where I am (also stated in the other question)? I've looked, and all I've found are tutorials that cover general DB basics (i.e. they don't help with VS/C#) or that cover connecting to existing SQL databases. I need help setting a database up and configuring it, as well as help on how to create and use queries to test the DB schema.
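    Until a better tutorial turns up, the whole ADO.NET loop fits in a few lines; a minimal sketch (the connection string assumes a default SQL Server Express instance, and the database/table names are made up):

    ```csharp
    using System;
    using System.Data.SqlClient;

    class AdoNetSmokeTest
    {
        static void Main()
        {
            // Assumption: a .\SQLEXPRESS instance with an existing TestDb database.
            const string cs = @"Data Source=.\SQLEXPRESS;Initial Catalog=TestDb;Integrated Security=True";

            using (var conn = new SqlConnection(cs))
            {
                conn.Open();

                // DDL and inserts go through ExecuteNonQuery...
                using (var create = new SqlCommand(
                    "IF OBJECT_ID('Pets') IS NULL CREATE TABLE Pets (Id INT IDENTITY PRIMARY KEY, Name NVARCHAR(50))", conn))
                    create.ExecuteNonQuery();

                using (var insert = new SqlCommand("INSERT INTO Pets (Name) VALUES (@name)", conn))
                {
                    insert.Parameters.AddWithValue("@name", "Rex");
                    insert.ExecuteNonQuery();
                }

                // ...and queries come back through ExecuteReader.
                using (var select = new SqlCommand("SELECT Id, Name FROM Pets", conn))
                using (var reader = select.ExecuteReader())
                    while (reader.Read())
                        Console.WriteLine("{0}: {1}", reader.GetInt32(0), reader.GetString(1));
            }
        }
    }
    ```

    The same three calls (ExecuteNonQuery for DDL and inserts, ExecuteReader for queries) cover most of what's needed to exercise a schema from C#.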

  • Entity Framework Custom T4 Template

    - by s7orm
    Hello, I am using Entity Framework 4.0 and I am trying to extend my POCO classes, generated by the standard T4 template, with some custom properties. The classes generated by default from EF (without T4) contain two properties for every navigation property: NavigationPropertyId and NavigationPropertyReference. What I am trying to do is basically extend the generated POCO class with an "Id" property for the foreign key object. I know that I can do that by editing the POCO T4 template. However, I have no idea how to populate/persist the property - I guess I have to somehow extend/modify the method that converts the object to a SQL query and vice versa, but I have no idea where to start. Can anyone help? Edit: I just found out that the EF team has planned to implement something like this - http://blogs.msdn.com/efdesign/archive/2010/03/10/poco-template-code-generation-options.aspx (see Basic POCO with/without Fixup) - however this could take ages, so I'd rather implement it myself.
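    If regenerating the model is acceptable, EF4 can already produce this shape without editing the template: import with "Include foreign key columns in the model" checked, and the POCO template emits a scalar FK property next to each navigation property, with the framework keeping the pair in sync during SaveChanges. Roughly what comes out (type and property names are illustrative):

    ```csharp
    // Generated shape when the model uses FK associations rather than
    // independent associations:
    public class Order
    {
        public virtual int Id { get; set; }
        public virtual int CustomerId { get; set; }        // scalar foreign key
        public virtual Customer Customer { get; set; }     // navigation property
    }

    public class Customer
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
    }
    ```

    Setting either CustomerId or Customer before SaveChanges is then enough - EF fixes up the other side, so no custom SQL plumbing is required. Hand-editing the T4 only becomes unavoidable if the model must keep independent associations.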

  • Transactions not working for SubSonic under Oracle?

    - by Fervelas
    The following code sample works perfectly under SQL Server 2005: using (TransactionScope ts = new TransactionScope()) { using (SharedDbConnectionScope scope = new SharedDbConnectionScope()) { MyTable t = new MyTable(); t.Name = "Test"; t.Comments = "Comments 123"; t.Save(); ts.Complete(); } } But under Oracle 10g it throws an "ORA-02089: COMMIT is not allowed in a subordinate session" error. If I only execute the code inside the SharedDbConnectionScope block, everything works OK, but obviously I won't be able to execute operations under a transaction, thus risking data corruption. This is only a small sample of what my real application does. I'm not sure what may be causing this behavior; would anyone out there care to shed some light on this issue? Many thanks in advance.
