Search Results

Search found 21589 results on 864 pages for 'primary key'.


  • feeling insufficient while looking for a programming job ...

    - by user325661
    Hello everybody. It has always been my dream to be a programmer. I started with a little Commodore BASIC, which I could never really figure out. Then I tried to teach myself Visual Basic 6, and the results weren't as good as I expected because I didn't even understand classes, so I gave up on learning to program and on the dream. But the challenge wasn't over: later I started learning C# and wrote a chess engine, something I never believed I could do. I sometimes feel that I understand many things about programming: classes, inheritance, abstract classes and methods and why they exist, interfaces, extension methods, static classes, threads. But I am not an expert in any of it; I just know as much as I have needed, and I know there is still a lot left to learn. I have also learned HTML, CSS2, PHP and some database concepts: a little SQL, tables, relationships between primary keys and so on, but I haven't used them in practice, only in small samples, so I feel my knowledge is thin. As a result I still can't evaluate my skills or decide what my level is. Sometimes I feel I'm just one step ahead of a junior :) and I can't decide whether it's time to start job seeking. While searching for jobs on the web I have never seen an advertisement for a junior; everyone is looking for someone with solid experience, and nobody seems to care about juniors. When I do find an advertisement I feel somewhat qualified for, I start to think I can't do what they want, and losing the job shortly after getting it would leave my already low self-confidence in pieces. Please advise me something... I also want to ask some concrete questions. 1) If I take a job and can't meet the employer's expectations, isn't it the employer's responsibility to have tested me when I applied? 2) If I answer "yes, I can" to abstract questions (for example: "can you do something like this?"), is that my responsibility? Such abstract questions aren't clear, and I can't know before starting whether I really can or not. 3) I have no professional experience in this work, so I don't even know how teams work on projects, but a friend of mine said that programmers only write method bodies while seniors design the whole project. That makes the work sound really easy, but I suspect small companies don't work like this, so is it better to start at a big company that has seniors? 4) The last job I applied for wanted a C#/.NET developer, but I then learned they needed a .NET web developer. Does it take long to learn the web-specific side of things when you already know C# desktop programming? Thanks for all answers in advance.

    Read the article

  • Selecting one row when working with typed datasets.

    - by Wodzu
    I have a typed dataset in my project. I would like to populate a DataTable with only one row instead of all rows, selected by the primary key column. I know I could modify the designer code to achieve this functionality, but if I change the code in the designer I risk that it will be deleted when I update my dataset via the designer in the future. So I wanted to alter the SelectCommand not in the designer but just before calling the MyTypedTableAdapter.Fill method. The strange thing is that the designer does not create a SelectCommand! It creates all the other commands but not this one. If it did create a SelectCommand I could alter it in this way:

        this.operatorzyTableAdapter.Adapter.SelectCommand.CommandText += " WHERE MyColumn = 1";

    It is far from perfect, but at least I would not have to modify the designer's work. Unfortunately, as I said earlier, the SelectCommand is not created. Instead the designer creates something like this:

        [global::System.Diagnostics.DebuggerNonUserCodeAttribute()]
        private void InitCommandCollection() {
            this._commandCollection = new global::System.Data.SqlClient.SqlCommand[1];
            this._commandCollection[0] = new global::System.Data.SqlClient.SqlCommand();
            this._commandCollection[0].Connection = this.Connection;
            this._commandCollection[0].CommandText = "SELECT Ope_OpeID, Ope_Kod, Ope_Haslo, Ope_Imie, Ope_Nazwisko FROM dbo.Operatorzy";
            this._commandCollection[0].CommandType = global::System.Data.CommandType.Text;
        }

    This doesn't make sense in my opinion. Why create UpdateCommand, InsertCommand and DeleteCommand but not SelectCommand? I could bear with this, but _commandCollection is private, so I cannot access it outside of the class code, and I don't know how to get at this collection without changing the designer's code. The idea I have is to expose the collection via a partial class definition. However, I want to introduce many typed datasets and I really don't want to create a partial class definition for each of them. Please note that I am using .NET 3.5. I've found an article about accessing private properties, but it concerns .NET 4.0. Thanks for your time.
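
    One possible route is the partial-class extension mentioned above; a minimal sketch follows, assuming the dataset is called MyDataSet and the adapter OperatorzyTableAdapter (placeholder names). Because a partial class is the same class as the generated one, the generated Adapter, Connection and CommandCollection members are reachable without touching the designer file:

        // Sketch only: reuse the designer-generated SELECT text and append a key filter.
        namespace MyDataSetTableAdapters
        {
            public partial class OperatorzyTableAdapter
            {
                public int FillByID(MyDataSet.OperatorzyDataTable dataTable, int id)
                {
                    // CommandCollection[0] holds the generated SELECT shown above.
                    string sql = this.CommandCollection[0].CommandText + " WHERE Ope_OpeID = @id";
                    using (var cmd = new System.Data.SqlClient.SqlCommand(sql, this.Connection))
                    {
                        cmd.Parameters.AddWithValue("@id", id);
                        this.Adapter.SelectCommand = cmd;
                        return this.Adapter.Fill(dataTable);   // Fill opens/closes the connection as needed
                    }
                }
            }
        }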

    Read the article

  • Override Linq-to-Sql Datetime.ToString() Default Convert Values

    - by snmcdonald
    Is it possible to override the default CONVERT style? I would like the default CONVERT function to always return ISO 8601 style 126. Steps to reproduce:

        DROP TABLE DATES;
        CREATE TABLE DATES (
          ID INT IDENTITY(1,1) PRIMARY KEY,
          MYDATE DATETIME DEFAULT(GETUTCDATE())
        );
        INSERT INTO DATES DEFAULT VALUES;
        INSERT INTO DATES DEFAULT VALUES;
        INSERT INTO DATES DEFAULT VALUES;
        INSERT INTO DATES DEFAULT VALUES;
        SELECT CONVERT(NVARCHAR,MYDATE) AS CONVERTED,
               CONVERT(NVARCHAR(4000),MYDATE,126) AS ISO,
               MYDATE
        FROM DATES
        WHERE MYDATE LIKE 'Feb%'

    Output:

        CONVERTED            ISO                      MYDATE
        -------------------  -----------------------  -----------------------
        Feb  8 2011 12:17AM  2011-02-08T00:17:03.040  2011-02-08 00:17:03.040
        Feb  8 2011 12:17AM  2011-02-08T00:17:03.040  2011-02-08 00:17:03.040
        Feb  8 2011 12:17AM  2011-02-08T00:17:03.040  2011-02-08 00:17:03.040
        Feb  8 2011 12:17AM  2011-02-08T00:17:03.040  2011-02-08 00:17:03.040

    Linq-to-Sql calls CONVERT(NVARCHAR,@p) when I call ToString(). However, I am displaying all my data in the ISO 8601 format. I would like to override the database default, if possible, to CONVERT(NVARCHAR,@p,126). I am using dynamic Linq-to-Sql as demoed by ScottGu to process my data:

        PropertyInfo piField = typeof(T).GetProperty(rule.field);
        if (piField != null)
        {
            Type typeField = piField.PropertyType;
            if (typeField.IsGenericType && typeField.GetGenericTypeDefinition().Equals(typeof(Nullable<>)))
            {
                filter = filter
                    .Select(x => x)
                    .Where(string.Format("{0} != null", rule.field))
                    .Where(string.Format("{0}.Value.ToString().Contains(\"{1}\")", rule.field, rule.data));
            }
            else
            {
                filter = filter
                    .Select(x => x)
                    .Where(string.Format("{0} != null", rule.field))
                    .Where(string.Format("{0}.ToString().Contains(\"{1}\")", rule.field, rule.data));
            }
        }

    I was hoping my property would convert the expression from CONVERT(NVARCHAR,@p) to CONVERT(NVARCHAR,@p,126); however, I get a NotSupportedException: ... has no supported translation to SQL.

        public string IsoDate
        {
            get
            {
                if (SUBMIT_DATE.HasValue)
                {
                    return SUBMIT_DATE.Value.ToString("o");
                }
                else
                {
                    return string.Empty;
                }
            }
        }
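
    A custom property like IsoDate has no SQL translation, so one workaround (a sketch only; the table and property names here are illustrative, not the poster's) is to keep the date filter in SQL and do the ISO-8601 formatting after the rows have materialized:

        // Filter in the database, then switch to LINQ to Objects with AsEnumerable(),
        // so ToString("o") runs in .NET instead of being translated to CONVERT().
        var rows = db.Submissions                        // hypothetical table
            .Where(s => s.SUBMIT_DATE != null)           // translated to SQL
            .AsEnumerable()                              // everything below runs locally
            .Select(s => new
            {
                s.ID,
                IsoDate = s.SUBMIT_DATE.Value.ToString("o")
            })
            .Where(s => s.IsoDate.Contains(searchText))  // string match against the ISO form
            .ToList();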

    Read the article

  • Python: Script works, but seems to deadlock after some time

    - by sberry2A
    I have the following script, which works for the most part: Link to PasteBin. The script's job is to start a number of threads, each of which starts a subprocess with Popen. The output from each subprocess is as follows:

        1
        2
        3
        .
        .
        .
        n
        Done

    Basically the subprocess is transferring 10M records from tables in one database to different tables in another database, with a lot of data massaging/manipulation in between because of the different schemas. If the subprocess fails at any point in its execution (bad records, duplicate primary keys, etc.), or if it completes successfully, it outputs "Done\n". If there are no more records to select for transfer, it outputs "NO DATA\n". My intent was for my script, tableTransfer.py, to spawn a number of these processes, read their output, and in turn report information such as the number of updates completed, time remaining, time elapsed, and transfers per second. I started it running last night and checked this morning to find it had deadlocked: there were no subprocesses running, there are still records to be updated, and the script had not exited. It was simply sitting there, no longer printing the progress information, because no subprocesses were running to update the completed-count that drives the output. This is running on OS X. I am looking for three things: 1) I would like to get rid of the possibility of this deadlock occurring so I don't need to check on it as frequently. Is there some issue with locking? Am I doing this in a bad way (the gThreading variable controlling the loop that spawns additional threads, etc.)? I would appreciate suggestions for improving my overall methodology. 2) How should I handle a Ctrl-C exit? Right now I need to kill the process, but I assume I should be able to use the signal module (or something similar) to catch the signal and stop the threads. Is that right? 3) I am not sure whether I should paste my entire script here, since I usually just paste snippets. Let me know if I should include it as well.
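
    Without the full script this is only a guess, but a classic cause of this symptom is a child process blocking on a full, unread pipe (for example stderr filling up while the parent only reads stdout). A minimal sketch of a worker that avoids that, plus a Ctrl-C hook; the command passed to Popen is a placeholder:

        import signal
        import subprocess
        import threading

        stop_event = threading.Event()

        def run_transfer(cmd):
            # Merge stderr into stdout so neither pipe can fill up unread and block the child.
            proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                    stderr=subprocess.STDOUT)
            for line in iter(proc.stdout.readline, b''):
                line = line.strip()
                if line in (b'Done', b'NO DATA') or stop_event.is_set():
                    break
                # ... update shared progress counters here, under a lock ...
            proc.stdout.close()
            proc.wait()

        def handle_sigint(signum, frame):
            # Ask the worker threads to wind down instead of killing the interpreter.
            stop_event.set()

        signal.signal(signal.SIGINT, handle_sigint)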

    Read the article

  • Stored procedure error when using a computed column for the ID

    - by Hari
    I get the error "Procedure or function usp_User_Info3 has too many arguments specified" when I run the program, and I don't know whether the problem is in the stored procedure or in the C# code. I have to display the Vendor_ID after the user clicks the submit button. Where is this going wrong?

    Table structure:

        CREATE TABLE User_Info3
        (
          SNo INT IDENTITY (2000,1),
          Vendor_ID AS 'VEN' + CAST(SNo AS VARCHAR(16)) PERSISTED PRIMARY KEY,
          UserName VARCHAR(16) NOT NULL,
          User_Password VARCHAR(12) NOT NULL,
          User_ConPassword VARCHAR(12) NOT NULL,
          User_FirstName VARCHAR(25) NOT NULL,
          User_LastName VARCHAR(25) SPARSE NULL,
          User_Title VARCHAR(35) NOT NULL,
          User_EMail VARCHAR(35) NOT NULL,
          User_PhoneNo VARCHAR(14) NOT NULL,
          User_MobileNo VARCHAR(14) NOT NULL,
          User_FaxNo VARCHAR(14) NOT NULL,
          UserReg_Date DATE DEFAULT GETDATE()
        )

    Stored procedure:

        ALTER PROCEDURE [dbo].[usp_User_Info3]
          @SNo INT OUTPUT,
          @Vendor_ID VARCHAR(10) OUTPUT,
          @UserName VARCHAR(30),
          @User_Password VARCHAR(12),
          @User_ConPassword VARCHAR(12),
          @User_FirstName VARCHAR(25),
          @User_LastName VARCHAR(25),
          @User_Title VARCHAR(35),
          @User_OtherEmail VARCHAR(30),
          @User_PhoneNo VARCHAR(14),
          @User_MobileNo VARCHAR(14),
          @User_FaxNo VARCHAR(14)
        AS
        BEGIN
          SET NOCOUNT ON;
          INSERT INTO User_Info3
            (UserName, User_Password, User_ConPassword, User_FirstName,
             User_LastName, User_Title, User_OtherEmail, User_PhoneNo, User_MobileNo, User_FaxNo)
          VALUES
            (@UserName, @User_Password, @User_ConPassword, @User_FirstName, @User_LastName,
             @User_Title, @User_OtherEmail, @User_PhoneNo, @User_MobileNo, @User_FaxNo)
          SET @SNo = Scope_Identity()
          SELECT Vendor_ID FROM User_Info3 WHERE SNo = @SNo
        END

    C# code:

        protected void BtnUserNext_Click(object sender, EventArgs e)
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.CommandText = "usp_User_Info3";
            System.Data.SqlClient.SqlParameter SNo = cmd.Parameters.Add("@SNo", System.Data.SqlDbType.Int);
            System.Data.SqlClient.SqlParameter Vendor_ID = cmd.Parameters.Add("@Vendor_ID", System.Data.SqlDbType.VarChar, 10);
            cmd.Parameters.Add("@UserName", SqlDbType.VarChar).Value = txtUserName.Text;
            cmd.Parameters.Add("@User_Password", SqlDbType.VarChar).Value = txtRegPassword.Text;
            cmd.Parameters.Add("@User_ConPassword", SqlDbType.VarChar).Value = txtRegConPassword.Text;
            cmd.Parameters.Add("@User_FirstName", SqlDbType.VarChar).Value = txtRegFName.Text;
            cmd.Parameters.Add("@User_LastName", SqlDbType.VarChar).Value = txtRegLName.Text;
            cmd.Parameters.Add("@User_Title", SqlDbType.VarChar).Value = txtRegTitle.Text;
            cmd.Parameters.Add("@User_OtherEmail", SqlDbType.VarChar).Value = txtOtherEmail.Text;
            cmd.Parameters.Add("@User_PhoneNo", SqlDbType.VarChar).Value = txtRegTelephone.Text;
            cmd.Parameters.Add("@User_MobileNo", SqlDbType.VarChar).Value = txtRegMobile.Text;
            cmd.Parameters.Add("@User_FaxNo", SqlDbType.VarChar).Value = txtRegFax.Text;
            cmd.Connection = SqlCon;
            try
            {
                Vendor_ID.Direction = System.Data.ParameterDirection.Output;
                SqlCon.Open();
                cmd.ExecuteNonQuery();
                string VendorID = cmd.ExecuteScalar() as string;
            }
            catch (Exception ex)
            {
                throw new Exception(ex.Message);
            }
            finally
            {
                string url = "../CompanyBasicInfo.aspx?Parameter=" + Server.UrlEncode(" + VendorID + ");
                SqlCon.Close();
            }
        }
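
    This is not a definitive diagnosis, but "too many arguments specified" usually means the command object carries more parameters than the procedure declares, for instance when a shared SqlCommand is reused and parameters are added to it again. A hedged sketch of a safer pattern (same control names assumed): build a fresh command per click, mark both output parameters, execute once, and read @Vendor_ID back instead of calling ExecuteScalar a second time.

        // Sketch only; SqlCon is the page's connection as in the posted code.
        using (var cmd = new SqlCommand("usp_User_Info3", SqlCon))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@SNo", SqlDbType.Int).Direction = ParameterDirection.Output;
            cmd.Parameters.Add("@Vendor_ID", SqlDbType.VarChar, 10).Direction = ParameterDirection.Output;
            cmd.Parameters.Add("@UserName", SqlDbType.VarChar, 16).Value = txtUserName.Text;
            // ... add each remaining input parameter exactly once ...
            SqlCon.Open();
            cmd.ExecuteNonQuery();   // single round trip
            // Note: for this to come back, the procedure must assign the output, e.g.
            // SELECT @Vendor_ID = Vendor_ID FROM User_Info3 WHERE SNo = @SNo
            string vendorId = cmd.Parameters["@Vendor_ID"].Value as string;
            SqlCon.Close();
        }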

    Read the article

  • How to manage multiple versions of the same record

    - by Darvis Lombardo
    I am doing short-term contract work for a company that is trying to implement a check-in/check-out type of workflow for their database records. Here's how it should work... 1) A user creates a new entity within the application. There are about 20 related tables that will be populated in addition to the main entity table. 2) Once the entity is created the user will mark it as the master. 3) Another user can make changes to the master only by "checking out" the entity. Multiple users can checkout the entity at the same time. 4) Once the user has made all the necessary changes to the entity, they put it in a "needs approval" status. 5) After an authorized user reviews the entity, they can promote it to master which will put the original record in a tombstoned status. The way they are currently accomplishing the "check out" is by duplicating the entity records in all the tables. The primary keys include EntityID + EntityDate, so they duplicate the entity records in all related tables with the same EntityID and an updated EntityDate and give it a status of "checked out". When the record is put into the next state (needs approval), the duplication occurs again. Eventually it will be promoted to master at which time the final record is marked as master and the original master is marked as dead. This design seems hideous to me, but I understand why they've done it. When someone looks up an entity from within the application, they need to see all current versions of that entity. This was a very straightforward way for making that happen. But the fact that they are representing the same entity multiple times within the same table(s) doesn't sit well with me, nor does the fact that they are duplicating EVERY piece of data rather than only storing deltas. I would be interested in hearing your reaction to the design, whether positive or negative. I would also be grateful for any resoures you can point me to that might be useful for seeing how someone else has implemented such a mechanism. Thanks! Darvis

    Read the article

  • LINQ Query Returning Multiple Copies Of First Result

    - by Mike G
    I'm trying to figure out why a simple query in LINQ is returning odd results. I have a view defined in the database. It basically brings together several other tables and does some data munging. It really isn't anything special except for the fact that it deals with a large data set and can be a bit slow. I want to query this view based on a long. Two sample queries below show different queries to this view. var la = Runtime.OmsEntityContext.Positions.Where(p => p.AccountNumber == 12345678).ToList(); var deDa = Runtime.OmsEntityContext.Positions.Where(p => p.AccountNumber == 12345678).Select(p => new { p.AccountNumber, p.SecurityNumber, p.CUSIP }).ToList(); The first one should hand back a List. The second one will be a list of anonymous objects. When I do these queries in entities framework the first one will hand me back a list of results where they're all exactly the same. The second query will hand me back data where the account number is the one that I queried and the other values differ. This seems to do this on a per account number basis, ie if I were to query for one account number or another all the Position objects for one account would have the same value (the first one in the list of Positions for that account) and the second account would have a set of Position objects that all had the same value (again, the first one in it's list of Position objects). I can write SQL that is in effect the same as either of the two EF queries. They both come back with results (say four) that show the correct data, one account number with different securities numbers. Why does this happen??? Is there something that I could be doing wrong so that if I had four results for the first query above that the first record's data also appears in the 2-4th's objects??? I cannot fathom what would/could be causing this. I've searched Google for all kinds of keywords and haven't seen anyone with this issue. We partial class out the Positions class for added functionality (smart object) and some smart properties. There are even some constructors that provide some view model type support. None of this is invoked in the request (I'm 99% sure of this). However, we do this same pattern all over the app. The only thing I can think of is that the mapping in the EDMX is screwy. Is there a way that this would happen if the "primary keys" in the EDMX were not in fact unique given the way the view is constructed? I'm thinking that the dev who imported this model into the EDMX let the designer auto select what would be unique. Any help would give a haggered dev some hope!

    Read the article

  • Audio/video streaming on Windows platform

    - by bushtucker
    I'm building an interactive language-learning application to be used in a classroom environment. The idea is that a teacher should be able to talk to the students (= audio streamed to all students), let students talk to each other (= P2P audio) in groups of two or more, and let students watch video coming from a DVD player or from a media server. It should be possible to save the audio/video streams. The teacher should also be able to monitor, take over or block the students' desktops. The platform is Windows and it's a desktop application, not a web application. The audio delay should be as small as possible. Optionally a student sitting at home should be supported, but that is not a high priority. I have now finished the classroom-control part of the application (login, monitor, block, ...) and want to start on the audio and video part. I've been evaluating several options, such as DirectX, GStreamer and SIP, and now I have to make a decision. DirectX seems an obvious choice for the Windows platform, but it only lets me capture and play back audio and video; the encoding/decoding/network part I would have to do myself. GStreamer contains all kinds of options to capture/encode/stream/save audio and video streams. I've experimented a bit with it (ossbuild) and it does seem to involve a lot of trial and error to make something work: microphone capture (via directsoundsrc) produces crackling noises on some computers; the rtpL16 payloader didn't work well; streaming raw audio over the network only worked at a sampling rate of 8000, no higher; and there are a lot of errors when receiving MPEG-4 video (bad I-frames), worse on some computers than on others. My impression is that GStreamer is primarily targeted at Linux platforms, and development and support for the Windows platform seem to be a little behind. Nevertheless it's a powerful framework that could save me months or years of work. SIP seems to be able to do everything I want, but it is targeted at telephony and IM, and I don't know how flexible SIP is. It seems to me that the SIP layer would just be overhead, as I already have a central (teacher) application that can control and set up all the streams. The interesting parts of frameworks like opalvoip and freeswitch are the actual audio/video capture, the encoding and the transmission. Does anyone know how these interesting parts relate to a framework like GStreamer? Are they easy to integrate into a custom application? Are they flexible enough? Does anyone have experience with all or one of these technologies? Maybe there are even other options I can look at? Many thanks for your advice.

    Read the article

  • many-to-many performance concerns with fluent nhibernate.

    - by Ciel
    I have a situation where I have several many-to-many associations. In the upwards of 12 to 15. Reading around I've seen that it's generally believed that many-to-many associations are not 'typical', yet they are the only way I have been able to create the associations appropriate for my case, so I'm not sure how to optimize any further. Here is my basic scenario. class Page { IList<Tag> Tags { get; set; } IList<Modification> Modifications { get; set; } IList<Aspect> Aspects { get; set; } } This is one of my 'core' classes, and coincidentally one of my core tables. Virtually half of the objects in my code can have an IList<Page>, and some of them have IList<T> where T has its own IList<Page>. As you can see, from an object oriented standpoint, this is not really a problem. But from a database standpoint this begins to introduce a lot of junction tables. So far it has worked fine for me, but I am wondering if anyone has any ideas on how I could improve on this structure. I've spent a long time thinking and in order to achieve the appropriate level of association required, I cannot think of any way to improve it. The only thing I have come up with is to make intermediate classes for each object that has an IList<Page>, but that doesn't really do anything that the HasManyToMany does not already do except introduce another class. It does not extend the functionality and, from what I can tell, it does not improve performance. Any thoughts? I am also concerned about Primary Key limits in this scenario. Most everything needs to be able to have these properties, but the Pages cannot be unique to each object, because they are going to be frequently shared and joined between multiple objects. All relationships are one-sided. (That is, a Page has no knowledge of what owns it). Because of this, I also have no Inverse() mapped HasManyToMany collections. Also, I have read the similar question : Usage of ORMs like NHibernate when there are many associations - performance concerns But it really did not answer my concerns.

    Read the article

  • JSON or YAML encoding in GWT/Java on both client and server

    - by KennethJ
    I'm looking for a super simple JSON or YAML library (not particularly bothered which one) written in Java, and can be used in both GWT on the client, and in its original Java form on the server. What I'm trying to do is this: I have my models, which are shared between the client and the server, and these are the primary source of data interchange. I want to design the web service in between to be as simple as possible, and decided to take the RESTful approach. My problem is that I know our application will grow substantially in the future, and writing all the getters, setters, serialization, factories, etc. by hand fills me with absolute dread. So in order to avoid it, I decided to implement annotations to keep track of attributes on the models. The reason I can't just serialize everything directly, using GWT's own one, or one which works through reflection, is because we need a certain amount of logic going on in the serialization process. I.e. whether references to other models get serialized during the serialization of the original model, or whether an ID is just passed, and general simple things like that. I've then written an annotation processor to preprocess my shared models and generate an implementing class with all the getters, setters, serialization, lazy-loading, etc. To make a long story short, I need some type of simple YAML or JSON library, which allows me to encode and decode manually, so I can generate this code through my annotation processor. I have had a look around the interwebs, but every single one I ran into supported some reflection which, while all fine and dandy, make it pretty much useless for GWT. And in the case of GWT's own JSON library, it uses JSNI for speed purposes, making it useless server side. One solution I did think about involved writing writing two sets of serialization methods on the models, one for the client and one for the server, but I'd rather not do that. Also, I'm pretty new to GWT, and even though I have done a lot of Java, it was back in the 1.2 days, so it's a bit rusty. So if you think I'm going about this problem completely the wrong way, I'm open to suggestions.

    Read the article

  • How to fine tune FluentNHibernate's auto mapper?

    - by Venemo
    Okay, so yesterday I managed to get the latest trunk builds of NHibernate and FluentNHibernate to work with my latest little project. (I'm working on a bug tracking application.) I created a nice data access layer using the Repository pattern. I decided that my entities are nothing special, and also that with the current maturity of ORMs, I don't want to hand-craft the database. So, I chose to use FluentNHibernate's auto mapping feature with NHibernate's "hbm2ddl.auto" property set to "create". It really works like a charm. I put the NHibernate configuration in my app domain's config file, set it up, and started playing with it. (For the time being, I created some unit tests only.) It created all tables in the database, and everything I need for it. It even mapped my many-to-many relationships correctly. However, there are a few small glitches: All of the columns created in the DB allow null. I understand that it can't predict which properties should allow null and which shouldn't, but at least I'd like to tell it that it should allow null only for those types for which null makes sense in .NET (eg. non-nullable value types shouldn't allow null). All of the nvarchar and varbinary columns it created, have a default length of 255. I would prefer to have them on max instead of that. Is there a way to tell the auto mapper about the two simple rules above? If the answer is no, will it work correctly if I modify the tables it created? (So, if I set some columns not to allow null, and change the allowed length for some other, will it correctly work with them?) EDIT: I managed to achieve the above by using Fluent NHibernate's convention API. Thanks to everyone who helped! However, there is one more thing: after checking out the convention API, I really would like my IDs to be calld "ID", not "Id", but it seems to me that the PrimaryKey.Name.Is(x => "ID") is not working at all. If I add it to the conventions collection and rewrite my entities' properties to "ID" instead of "Id", it throws an exception that there is no primary key mapped. Any thoughts on this?

    Read the article

  • Large number of UPDATE queries slowing down page

    - by Bryan Lewis
    I am reading and validating large fixed-width text files (ranging from 10-50K lines) that are submitted via our ASP.net website (coded in VB.Net). I do an initial scan of the file to check for basic issues (line length, etc). Then I import each row into an MS SQL table. Each DB row basically consists of a record_ID (primary, auto-incrementing) and about 50 varchar fields. After the insert is done, I run a validation function on the file that checks each field in each row against a bunch of criteria (trimmed length, isnumeric, range checks, etc). If it finds an error in any field, it inserts a record into the Errors table, which has an error_ID, the record_ID and an error message. In addition, if the field fails in a particular way, I have to do a "reset" on that field. A reset might consist of blanking the entire field, or simply replacing the value with another value (e.g. replacing the string with a new one that has all illegal chars taken out). I have a 5,000-line test file. The upload, initial check and import take about 5-6 seconds. The detailed error check and insert into the Errors table take about 5-8 seconds (this file has about 1,200 errors in it). However, the "resets" part takes about 40-45 seconds for the 750 fields that need to be reset. When I comment out the resets function (returning immediately without actually calling the UPDATE stored proc), the process is very fast. With the resets turned on, the page takes 50 seconds to return. My UPDATE stored proc uses some recommended code from http://sommarskog.se/dynamic_sql.html, whereby it uses CASE instead of dynamic SQL:

        UPDATE dbo.Records
        SET dbo.Records.file_ID = CASE @field_name
                                    WHEN 'file_ID' THEN @field_value
                                    ELSE file_ID
                                  END,
        .
        .   (all 50 varchar field CASE statements here)
        .
        WHERE dbo.Records.record_ID = @record_ID

    Is there any way I can improve performance here? Can I somehow group all of these UPDATE calls into a single transaction? Should I rework the UPDATE query somehow? Or is it just the sheer quantity of 750+ UPDATEs, and things are simply slow (it's a quad-proc server with 8GB RAM)? Any suggestions appreciated.
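
    One set-based alternative, sketched here with illustrative table and column names: stage the pending resets in a temporary table during validation, then apply them with one joined UPDATE per column instead of 750 row-at-a-time procedure calls.

        -- Sketch only: #Resets is populated once by the validation pass.
        CREATE TABLE #Resets (
            record_ID   INT          NOT NULL,
            field_name  VARCHAR(50)  NOT NULL,
            field_value VARCHAR(255) NULL
        );

        -- One set-based statement per column replaces many single-row calls.
        UPDATE r
        SET    r.file_ID = x.field_value
        FROM   dbo.Records AS r
        JOIN   #Resets     AS x ON x.record_ID = r.record_ID
        WHERE  x.field_name = 'file_ID';
        -- ...repeat (or generate) the same pattern for the other 49 columns...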

    Read the article

  • how to create a system-wide independent universal counter object primarily for Database keys?

    - by andora
    I would like to create/use a system-wide, independent, universal 'counter object' that can be called via COM in a thread-safe manner. The counter object will be passed an ID to identify which counter to return, handle the counting, 'persist' the count (occasionally), have reasonable performance (as fast as possible, perhaps capable of 1000 counts per second or better, i.e. 1 ms), and be accessible cross-process/out-of-process. The current count must be persisted between object restarts/shutdowns. The counter object is likely to be a 'singleton'-type object implemented in some form of free-threaded dictionary, containing maybe 10 counters (perhaps 50 max). The count needs to be monotonic and consistent (i.e. guaranteed unique sequential values). Each counter should have a few methods, like reset, inc, dec, set, clear, remove. As a luxury, I would like to have a variable increment (i.e. a 'step by' value). To support thread-safety, perhaps some form of critical-section or mutex call. It just needs to return a long/4-byte signed integer. I really want something that can be called from anywhere, including VBScript, so I figure COM is my preferred solution. The primary use of this is for database keys. I am unable to use autoinc or GUID-type keys and have ruled out database-generated counting systems at this point. I've spent days researching this and I have really struggled to find a solution. The best I can find is a free-threaded dictionary object that can be instantiated using COM+ from Motobit - it seems to offer all the 'basics' and I guess I could create some form of wrapper for it. So, here are my questions: Does such a general-purpose counter object already exist? Can you direct me to it? (MS did do an IIS/ASP object called 'MSWC.Counter', but that isn't a cross-process/out-of-process component and isn't thread-safe - if it were, it would do!) What is the best way of creating such a component? (I'd prefer VB6 right now [don't ask!] but could do it in VB.NET 2005 if I had to.) I don't have the skills/knowledge/tools to use anything else. I am desperate for a workable solution. I need specific guidance! If anybody can code something up for me I am prepared to pay for it.

    Read the article

  • A better way to delete a list of elements from multiple tables

    - by manyxcxi
    I know this looks like a 'please write the code' request, but some basic pointers/principles for doing this the right way should be enough to get me going. I have the following stored procedure:

        CREATE PROCEDURE `TAA`.`runClean` (IN idlist varchar(1000))
        BEGIN
          DECLARE EXIT HANDLER FOR NOT FOUND ROLLBACK;
          DECLARE EXIT HANDLER FOR SQLEXCEPTION ROLLBACK;
          DECLARE EXIT HANDLER FOR SQLWARNING ROLLBACK;
          START TRANSACTION;
            DELETE FROM RunningReports WHERE run_id IN (idlist);
            DELETE FROM TMD_INDATA_INVOICE WHERE run_id IN (idlist);
            DELETE FROM TMD_INDATA_LINE WHERE run_id IN (idlist);
            DELETE FROM TMD_OUTDATA_INVOICE WHERE run_id IN (idlist);
            DELETE FROM TMD_OUTDATA_LINE WHERE run_id IN (idlist);
            DELETE FROM TMD_TEST WHERE run_id IN (idlist);
            DELETE FROM RunHistory WHERE id IN (idlist);
          COMMIT;
        END $$

    It is called by a PHP script to clean out old run history. It is not particularly efficient, as you can see, and I would like to speed it up. The PHP script gathers the IDs to remove from the tables with the following query:

        $query = "SELECT id, stop_time FROM RunHistory WHERE config_id = $configId AND save = 0 AND NOT(stop_time IS NULL) ORDER BY stop_time";

    It keeps the last five run entries and deletes all the rest. So, using this query to bring back all the IDs, it determines which ones to delete and keeps the 'newest' five. After gathering the IDs it sends them to the stored procedure to remove them from the associated tables. I'm not very good with SQL, but I ASSUME that using an IN statement and not joining these tables together is probably the least efficient way I can do this, but I don't know enough to ask anything but "how do I do this better?" If possible, I would like to do this all in my stored procedure, using a query to gather all the IDs except for the five 'newest', then delete them. Another twist: run entries can be marked saved (save = 1) and should not be deleted. The RunHistory table looks like this:

        CREATE TABLE `TAA`.`RunHistory` (
          `id` int(11) NOT NULL auto_increment,
          `start_time` datetime default NULL,
          `stop_time` datetime default NULL,
          `config_id` int(11) NOT NULL,
          [...]
          `save` tinyint(1) NOT NULL default '0',
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8;
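
    A hedged sketch of the set-based version (the config_id parameter name is illustrative). MySQL does not accept a LIMIT inside an IN (...) subquery and cannot delete from a table it is selecting from in the same statement, so staging the doomed IDs in a temporary table sidesteps both restrictions:

        -- Sketch only: collect everything except the five newest unsaved runs.
        CREATE TEMPORARY TABLE doomed_ids
          SELECT id FROM RunHistory
          WHERE config_id = p_config_id       -- procedure parameter
            AND save = 0
            AND stop_time IS NOT NULL
          ORDER BY stop_time DESC
          LIMIT 5, 18446744073709551615;      -- skip the newest five, take the rest

        DELETE FROM RunningReports     WHERE run_id IN (SELECT id FROM doomed_ids);
        DELETE FROM TMD_INDATA_INVOICE WHERE run_id IN (SELECT id FROM doomed_ids);
        -- ...same pattern for the other TMD_* tables...
        DELETE FROM RunHistory         WHERE id     IN (SELECT id FROM doomed_ids);

        DROP TEMPORARY TABLE doomed_ids;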

    Read the article

  • using CSS to center FLOATED input elements wrapped in a DIV

    - by Tim
    There's no shortage of questions and answers about centering, but I've not been able to get it to work given my specific circumstances, which involve floating. I want to center a container DIV that contains three floated input elements (split-button, text, checkbox), so that when my page is resized wider, they go from this:

        ||.....[ ][v] [ ] [ ] label .....||

    to this:

        ||......................[ ][v] [ ] [ ] label.......................||

    They float fine, but when the page is made wider, they stay to the left:

        ||.....[ ][v] [ ] [ ] label .......................................||

    If I remove the float so that the input elements are stacked rather than side by side ([ ][v] / [ ] / [ ] label), then they DO center correctly when the page is resized. So it is the float being applied to the elements of the DIV#hbox inside the container that is messing up the centering. Is what I want to do impossible because of the way float is designed to work? Here is my DOCTYPE, and the markup does validate at W3C:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">

    Here is my markup:

        <div id="term1-container">
          <div class="hbox">
            <div>
              <button id="operator1" class="operator-split-button">equals</button>
              <button id="operator1drop">show all operators</button>
            </div>
            <div><input type="text" id="term1"></input></div>
            <div><input type="checkbox" id="meta2"></input><label for="meta2" class="tinylabel">meta</label></div>
          </div>
        </div>

    And here's the (not-working) CSS:

        #term1-container {text-align: center}
        .hbox {margin: 0 auto;}
        .hbox div {float:left; }

    I have also tried applying display: inline-block to the floated button, text input and checkbox; and even though I think it applies only to text, I've also tried applying white-space: nowrap to the #term1-container DIV, based on posts I've seen here on SO. And just to be a little more complete, here's the jQuery that creates the split button:

        $(".operator-split-button").button().click(
            function() {
                alert( "foo" );
            }).next().button( {
                text: false,
                icons: {
                    primary: "ui-icon-triangle-1-s"
                }
            }).click( function(){positionOperatorsMenu();} )
        })

    Read the article

  • How to properly preload images, js and css files?

    - by Kenny Bones
    Hi, I'm creating a website from scratch. I was really into this in the late 90s, but the web has changed a lot since then! And I'm more of a designer, so when I started putting this site together I basically did a system of PHP includes to make the site more "dynamic". When you first visit the site, you'll be presented with a logon screen if you're not already logged on (cookies). If you're not logged on, a page called access.php is introduced. I thought I'd preload the heaviest images at this point, so that when the user is done logging on the images are already cached. And this is working as I want. But I still notice that the biggest image isn't rendered immediately anyway, so it seems kind of pointless. All of this has made me rethink how the site is structured and how scripts and CSS files are loaded. Using Firebug and YSlow with Firefox I see a few pointers, like expires headers and reducing the size of each script. But is this really the culprit? For example, would this be really, really stupid in the main index.php? The entire site is basically structured like this:

        <?php require("dbconnect.php"); ?>
        <?php include ("head.php"); ?>

    Below this is basically just the body and the content of the site. head.php, however, consists of the doctype, the head section, the linking of two CSS stylesheets, the jQuery library, the jQuery validation engine, Cufon and the Cufon font file, and then the small Cufon.Replace snippet. The rest of the body comes with the index.php file, but at the bottom of it is an include of a file called footer.php, which basically consists of the loading of a couple of jsLoader scripts and a slide panel, and then a JS function. All of this makes the final page source look like a typical complete web page, but I'm wondering if any of you can see immediately that "this is really really stupid" and "don't do that, do this instead", etc. :) Are includes a bad way to go? This site is also pretty image-intensive and I can probably do a little more optimization, but I don't think that's the primary culprit. YSlow gives me a report of what takes up the most space:

        doc(1)      -   5.8K
        js(5)       - 198.7K
        css(2)      -   5.6K
        cssimage(8) - 634.7K
        image(6)    - 110.8K

    I know it looks like the cssimage(8) entries weigh the most, but I've already preloaded those images and it doesn't really affect the rendering.

    Read the article

  • Gotchas INSERTing into SQLite on Android?

    - by paul.meier
    Hi friends, I'm trying to set up a simple SQLite database in Android, handling the schema via a subclass of SQLiteOpenHelper. However, when I query my tables, the rows I think I've inserted are never present. Namely, in SQLiteOpenHelper's onCreate(SQLiteDatabase db) method, I use db.execSQL() to run CREATE TABLE commands, then have tried both db.execSQL() and db.insert() to run INSERT commands on the tables I've just created. This appears to run fine, but when I try to query them I always get 0 rows returned (for debugging, the queries I'm running are simple SELECT * FROM table, checking the Cursor's getCount()). Has anybody run into anything like this before? These commands seem to run fine on the command-line sqlite3. Are there gotchas that I'm missing (e.g. must INSERTs be semicolon-terminated or not, or is there some issue involving multiple tables)? I've attached some of the code below. Thanks for your time, and let me know if I can clarify further.

        @Override
        public void onCreate(SQLiteDatabase db) {
            db.execSQL("CREATE TABLE "+ LEVEL_TABLE +" (" +
                " "+ _ID +" INTEGER PRIMARY KEY AUTOINCREMENT," +
                " level TEXT NOT NULL,"+
                " rows INTEGER NOT NULL,"+
                " cols INTEGER NOT NULL);");
            db.execSQL("CREATE TABLE "+ DYNAMICS_TABLE +" (" +
                " level_id INTEGER NOT NULL," +
                " row INTEGER NOT NULL,"+
                " col INTEGER NOT NULL,"+
                " type INTEGER NOT NULL);");
            db.execSQL("CREATE TABLE "+ SCORE_TABLE +" (" +
                " level_id INTEGER NOT NULL," +
                " score INTEGER NOT NULL,"+
                " date_achieved DATE NOT NULL,"+
                " name TEXT NOT NULL);");
            this.enterFirstLevel(db);
        }

    And a sample of the insert code I'm currently using, which gets called in enterFirstLevel() (some values hard-coded just to get it running...):

        private void insertDynamic(SQLiteDatabase db, int row, int col, int type) {
            ContentValues values = new ContentValues();
            values.put("level_id", "1");
            values.put("row", Integer.toString(row));
            values.put("col", Integer.toString(col));
            values.put("type", Integer.toString(type));
            db.insertOrThrow(DYNAMICS_TABLE, "col", values);
        }

    Finally, the query code looks like this:

        private Cursor fetchLevelDynamics(int id) {
            SQLiteDatabase db = this.leveldata.getReadableDatabase();
            try {
                String fetchQuery = "SELECT * FROM " + DYNAMICS_TABLE;
                String[] queryArgs = new String[0];
                Cursor cursor = db.rawQuery(fetchQuery, queryArgs);
                Activity activity = (Activity) this.context;
                activity.startManagingCursor(cursor);
                return cursor;
            } finally {
                db.close();
            }
        }
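
    Two gotchas are worth checking here, though neither can be confirmed without running the code: onCreate() only fires when the database file is first created (so schema or seed-data changes need an uninstall or a version bump in the SQLiteOpenHelper constructor), and closing the database in the finally block before the returned Cursor has been read can leave the cursor unable to load its rows. A sketch of the fetch without the premature close, using the names from the posted code:

        private Cursor fetchLevelDynamics(int id) {
            // Keep the database open while the cursor is alive; the Activity-managed
            // cursor is closed later, and the helper can close the db on teardown.
            SQLiteDatabase db = this.leveldata.getReadableDatabase();
            String fetchQuery = "SELECT * FROM " + DYNAMICS_TABLE;
            Cursor cursor = db.rawQuery(fetchQuery, null);
            Activity activity = (Activity) this.context;
            activity.startManagingCursor(cursor);
            return cursor;  // note: no db.close() here
        }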

    Read the article

  • Methodology for a Rails app

    - by Aaron Vegh
    I'm undertaking a rather large conversion from a legacy database-driven Windows app to a Rails app. Because of the large number of forms and database tables involved, I want to make sure I've got the right methodology before getting too far. My chief concern is minimizing the amount of code I have to write. There are many models that interact together, and I want to make sure I'm using them correctly. Here's a simplified set of models:

        class Patient < ActiveRecord::Base
          has_many :PatientAddresses
          has_many :PatientFileStatuses
        end

        class PatientAddress < ActiveRecord::Base
          belongs_to :Patient
        end

        class PatientFileStatus < ActiveRecord::Base
          belongs_to :Patient
        end

    The controller determines if there's a Patient selected; everything else is based on that. In the view, I will be needing data from each of these models. But it seems like I have to write an instance variable in my controller for every attribute that I want to use. So I start writing code like this:

        @patient = Patient.find(session[:patient])
        @patient_addresses = @patient.PatientAddresses
        @patient_file_statuses = @patient.PatientFileStatuses
        @enrollment_received_when = @patient_file_statuses[0].EnrollmentReceivedWhen
        @consent_received = @patient_file_statuses[0].ConsentReceived
        @consent_received_when = @patient_file_statuses[0].ConsentReceivedWhen

    The first three lines grab the Patient model and its relations. The next three lines are examples of my providing values to the view from one of those relations. The view has a combination of text fields and select fields to show the data above. For example:

        <%= select("patientfilestatus", "ConsentReceived",
                   {"val1" => "val1", "val2" => "val2", "Written" => "Written"},
                   :include_blank => true ) %>
        <%= calendar_date_select_tag "patient_file_statuses[EnrollmentReceivedWhen]",
                   @enrollment_complete_when, :popup => :force %>

    (BTW, the select tag isn't really working; I think I have to use collection_select?) My questions are: Do I have to manually declare the value of every instance variable in the controller, or can/should I do it within the view? What is the proper technique for displaying a select tag for data that's not in the primary model? When I go to save changes to this form, will I have to manually pick out the attributes for each model and save them individually? Or is there a way to name the fields such that ActiveRecord does the right thing? Thanks in advance, Aaron.

    Read the article

  • What makes great software?

    - by VirtuosiMedia
    From the perspective of an end user, what makes a software great rather than just good or functional? What are some fundamental principles that can shift the way a software is used and perceived? What are some of the little finishing touches that help put an application over the top? I'm in the later stages of developing a web app and I'm looking for ideas or concepts that I may have missed. If you have specific examples of software or apps that you absolutely love, please share the reasons or features that make it special. Keep in mind that I'm looking for examples that directly affect the end user, but not necessarily just UI suggestions. Here are some of the principles and little touches I'm trying to use: Keep the UI as simple as possible. Remove absolutely everything that isn't necessary. Use progressive disclosure when more information can be needed sometimes but isn't needed all the time. Provide inline help and useful error messages. Verbs on buttons wherever possible. Make anything that's clickable obvious. Fast, responsive UI. Accessibility (this is a work in progress). Reusable UI patterns. Once a user learns a skill, they will be able to use it in multiple places. Intelligent default settings. Auto-focusing forms when filling out the form is the primary action to be taken on the page. Clear metaphors (like tabs) and headings indicating location within the app. Automating repetitive tasks (with the ability to disable the automation). Use standardized or accepted metaphors for icons (like an "x" for delete). Larger text sizes for improved readability. High contrast so that each section is distinct. Making sure that it's obvious on every page what the user is supposed to do by establishing a clear information hierarchy and drawing the eye to the call to action. Most deletions can be undone. Discoverability - Make it easy to learn how to do new tasks. Group similar elements together.

    Read the article

  • How do I write sql data into a textbox after a submit type event

    - by Matt
    I'm finishing up some homework and I'm having trouble figuring out how to take a value generated in a SQL column (a primary key set up to assign a record number to a customer, for example 1046) at submit time and write it to the receipt page I redirect to. I call it recipt.aspx. Any takers? My professor says to use a DataReader... but things go bad after that.

        public partial class _Default : System.Web.UI.Page
        {
            String cnStr = "EDITED FOR THE PURPOSE OF NOT DISPLAYED SQL SERVERta Source=111.11.111.11; uid=xxxxxxx; password=xxxx; database=xxxxxx; ";
            String insertStr;
            SqlDataReader reader;
            SqlConnection myConnection = new SqlConnection();

            protected void submitbutton_Click(object sender, EventArgs e)
            {
                myConnection.ConnectionString = cnStr;
                try
                {
                    //more magic happens as myConnection opens
                    myConnection.Open();
                    insertStr = "insert into connectAssignment values ('" + TextBox2.Text + "','" + TextBox3.Text + "','" + TextBox4.Text + "','" + TextBox5.Text + "','" + bigtextthing.Text + "','" + DropDownList1.SelectedItem.Value + "')";
                    //magic happens as the connection string is assigned to the connection object and the SQL statement is passed in
                    //associate the command to the myConnection connection object
                    SqlCommand cmd = new SqlCommand(insertStr, myConnection);
                    cmd.ExecuteNonQuery();
                    Session["passmyvalue1"] = TextBox2.Text;
                    Session["passmyvalue2"] = TextBox3.Text;
                    Session["passmyvalue3"] = TextBox4.Text;
                    Session["passmyvalue4"] = TextBox5.Text;
                    Session["passmyvalue5"] = bigtextthing.Text;
                    Session["I NEED SOME HELP RIGHT HERE"] = Textbox6.Text;
                    Response.Redirect("receipt.aspx");
                }
                catch
                {
                    bigtextthing.Text = "Error submitting" + "Possible causes: Internet is down, server is down, check your settings!";
                }
                finally
                {
                    myConnection.Close();
                    TextBox2.Text = "";
                    TextBox3.Text = "";
                    TextBox4.Text = "";
                    TextBox5.Text = "";
                    bigtextthing.Text = "";
                }
                //reset validators?
            }
        }

    The receipt page:

        public partial class _Default : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                if (Session["passmyvalue1"] != null)
                {
                    TextBox1.Text = (string)Session["passmyvalue1"];
                    TextBox2.Text = (string)Session["passmyvalue2"];
                    TextBox3.Text = (string)Session["passmyvalue3"];
                    TextBox4.Text = (string)Session["passmyvalue4"];
                    TextBox5.Text = (string)Session["passmyvalue5"];
                    TextBox6.Text = I don't know ;
                }
            }
        }

    Thanks for the help.
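
    A hedged sketch of the missing piece, assuming the table's key is an IDENTITY column (the column list below is illustrative, not the real schema): have the INSERT return SCOPE_IDENTITY() via ExecuteScalar, stash the value in Session, and read it on the receipt page.

        // On the submit page: parameterized insert that returns the new key.
        insertStr = "INSERT INTO connectAssignment (col1, col2) VALUES (@v1, @v2); " +
                    "SELECT CAST(SCOPE_IDENTITY() AS int);";
        using (SqlCommand cmd = new SqlCommand(insertStr, myConnection))
        {
            cmd.Parameters.AddWithValue("@v1", TextBox2.Text);
            cmd.Parameters.AddWithValue("@v2", TextBox3.Text);
            int newId = (int)cmd.ExecuteScalar();      // the generated record number, e.g. 1046
            Session["recordNumber"] = newId;
        }
        Response.Redirect("receipt.aspx");

        // On receipt.aspx, in Page_Load:
        // TextBox6.Text = Session["recordNumber"].ToString();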

    Read the article

  • openDatabase Hello World

    - by cf_PhillipSenn
    I'm trying to learn about openDatabase, and I think I'm getting it to INSERT INTO Table1, but I can't verify that the SELECT * FROM Table1 is working.

        <html>
        <head>
            <script src="http://www.google.com/jsapi"></script>
            <script type="text/javascript">
                google.load("jquery", "1");
            </script>
            <script type="text/javascript">
                var db;
                $(function(){
                    db = openDatabase('HelloWorld');
                    db.transaction(
                        function(transaction) {
                            transaction.executeSql(
                                'CREATE TABLE IF NOT EXISTS Table1 ' +
                                ' (TableID INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT, ' +
                                '  Field1 TEXT NOT NULL );'
                            );
                        }
                    );
                    db.transaction(
                        function(transaction) {
                            transaction.executeSql(
                                'SELECT * FROM Table1;',
                                function (transaction, result) {
                                    for (var i=0; i < result.rows.length; i++) {
                                        alert('1');
                                        $('body').append(result.rows.item(i));
                                    }
                                },
                                errorHandler
                            );
                        }
                    );
                    $('form').submit(function() {
                        var xxx = $('#xxx').val();
                        db.transaction(
                            function(transaction) {
                                transaction.executeSql(
                                    'INSERT INTO Table1 (Field1) VALUES (?);',
                                    [xxx],
                                    function(){ alert('Saved!'); },
                                    errorHandler
                                );
                            }
                        );
                        return false;
                    });
                });

                function errorHandler(transaction, error) {
                    alert('Oops. Error was '+error.message+' (Code '+error.code+')');
                    transaction.executeSql('INSERT INTO errors (code, message) VALUES (?, ?);',
                        [error.code, error.message]);
                    return false;
                }
            </script>
        </head>
        <body>
            <form method="post">
                <input name="xxx" id="xxx" />
                <p>
                    <input type="submit" name="OK" />
                </p>
                <a href="http://www.google.com">Cancel</a>
            </form>
        </body>
        </html>

    Read the article

  • Generating Unordered List with PHP + CodeIgniter from a MySQL Database

    - by Tim
    Hello everyone, I am trying to build a dynamically generated unordered list in the following format using PHP. I am using CodeIgniter, but it can just be normal PHP. This is the end output I need to achieve:

        <ul id="categories" class="menu">
          <li rel="1"> Arts &amp; Humanities
            <ul>
              <li rel="2"> Photography
                <ul>
                  <li rel="3"> 3D </li>
                  <li rel="4"> Digital </li>
                </ul>
              </li>
              <li rel="5"> History </li>
              <li rel="6"> Literature </li>
            </ul>
          </li>
          <li rel="7"> Business &amp; Economy </li>
          <li rel="8"> Computers &amp; Internet </li>
          <li rel="9"> Education </li>
          <li rel="11"> Entertainment
            <ul>
              <li rel="12"> Movies </li>
              <li rel="13"> TV Shows </li>
              <li rel="14"> Music </li>
              <li rel="15"> Humor </li>
            </ul>
          </li>
          <li rel="10"> Health </li>
        </ul>

    And here is the SQL that I have to work with:

        --
        -- Table structure for table `categories`
        --
        CREATE TABLE IF NOT EXISTS `categories` (
          `id` mediumint(8) NOT NULL auto_increment,
          `dd_id` mediumint(8) NOT NULL,
          `parent_id` mediumint(8) NOT NULL,
          `cat_name` varchar(256) NOT NULL,
          `cat_order` smallint(4) NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;

    So I know that I am going to need at least one foreach loop to generate the first level of categories. What I don't know is how to iterate inside each loop and check for parents, and do that in a dynamic way so that there could be an endless tree of children. Thanks for any help you can offer. Tim
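
    One common approach is sketched below; it is not CodeIgniter-specific and assumes the rows have already been fetched into an array and that top-level categories use parent_id = 0. The rows are grouped by parent once, then the list is emitted recursively:

        <?php
        // $rows: each element is an associative array with id, parent_id, cat_name.
        function build_tree(array $rows) {
            $children = array();
            foreach ($rows as $row) {
                $children[$row['parent_id']][] = $row;   // group rows by their parent
            }
            return render_level($children, 0, 'id="categories" class="menu"');
        }

        function render_level(array $children, $parent_id, $attrs = '') {
            if (empty($children[$parent_id])) {
                return '';                               // leaf node: no nested <ul>
            }
            $html = "<ul $attrs>\n";
            foreach ($children[$parent_id] as $row) {
                $html .= '<li rel="' . $row['id'] . '">' . htmlspecialchars($row['cat_name']);
                $html .= render_level($children, $row['id']);   // recurse into this node's children
                $html .= "</li>\n";
            }
            return $html . "</ul>\n";
        }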

    Read the article

  • What database table structure should I use for versions, codebases, deployables?

    - by Zac Thompson
    I'm having doubts about my table structure, and I wonder if there is a better approach. I've got a little database for version control repositories (e.g. SVN), the packages (e.g. Linux RPMs) built therefrom, and the versions (e.g. 1.2.3-4) thereof. A given repository might produce no packages, or several, but if there are more than one for a given repository then a particular version for that repository will indicate a single "tag" of the codebase. A particular version "string" might be used to tag a version of the source code in more than one repository, but there may be no relationship between "1.0" for two different repos. So if packages P and Q both come from repo R, then P 1.0 and Q 1.0 are both built from the 1.0 tag of repo R. But if package X comes from repo Y, then X 1.0 has no relationship to P 1.0. In my (simplified) model, I have the following tables (the x_id columns are auto-incrementing surrogate keys; you can pretend I'm using a different primary key if you wish, it's not really important): repository - repository_id - repository_name (unique) ... version - version_id - version_string (unique for a particular repository) - repository_id ... package - package_id - package_name (unique) - repository_id ... This makes it easy for me to see, for example, what are valid versions of a given package: I can join with the version table using the repository_id. However, suppose I would like to add some information to this database, e.g., to indicate which package versions have been approved for release. I certainly need a new table: package_version - version_id - package_id - package_version_released ... Again, the nature of the keys that I use are not really important to my problem, and you can imagine that the data column is "promotion_level" or something if that helps. My doubts arise when I realize that there's really a very close relationship between the version_id and the package_id in my new table ... they must share the same repository_id. Only a small subset of package/version combinations are valid. So I should have some kind of constraint on those columns, enforcing that ... ... I don't know, it just feels off, somehow. Like I'm including somehow more information than I really need? I don't know how to explain my hesitance here. I can't figure out which (if any) normal form I'm violating, but I also can't find an example of a schema with this sort of structure ... not being a DBA by profession I'm not sure where to look. So I'm asking: am I just being overly sensitive?
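
    One way to keep that "same repository" rule in the schema itself (a sketch in generic SQL, reusing the surrogate keys described above) is to carry repository_id into package_version and point composite foreign keys at unique (repository_id, ...) pairs, so a package can only ever be paired with a version from its own repository:

        -- Assumes version(version_id) and package(package_id) already exist as keys.
        ALTER TABLE version ADD CONSTRAINT uq_version_repo UNIQUE (repository_id, version_id);
        ALTER TABLE package ADD CONSTRAINT uq_package_repo UNIQUE (repository_id, package_id);

        CREATE TABLE package_version (
            package_id    INT NOT NULL,
            version_id    INT NOT NULL,
            repository_id INT NOT NULL,   -- deliberately "redundant"; it ties the pair together
            package_version_released BOOLEAN NOT NULL DEFAULT FALSE,
            PRIMARY KEY (package_id, version_id),
            FOREIGN KEY (repository_id, package_id) REFERENCES package (repository_id, package_id),
            FOREIGN KEY (repository_id, version_id) REFERENCES version (repository_id, version_id)
        );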

    Read the article

  • basic operations for modifying a source document with XSLT

    - by SpliFF
    All the tutorials and examples I've found of XSLT processing seem to assume your destination will be a significantly different format/structure to your source and that you know the structure of the source in advance. I'm struggling with finding out how to perform simple "in-place" modifications to a HTML document without knowing anything else about its existing structure. Could somebody show me a clear example that, given an arbitrary unknown HTML source will: 1.) delete the classname 'foo' from all divs 2.) delete a node if its empty (ie <p></p>) 3.) delete a <p> node if its first child is <br> 4.) add newattr="newvalue" to all H1 5.) replace 'heading' in text nodes with 'title' 6.) wrap all <u> tags in <b> tags (ie, <u>foo</u> -> <b><u>foo</u></b>) 7.) output the transformed document without changing anything else The above examples are the primary types of transform I wish to accomplish. Understanding how to do the above will go a long way towards helping me build more complex transforms. To help clarify/test the examples here is a sample source and output, however I must reiterate that I want to work with arbitrary samples without rewriting the XSLT for each source: <!doctype html> <html> <body> <h1>heading</h1> <p></p> <p><br>line</p> <div class="foo bar"><u>baz</u></div> <p>untouched</p> </body> </html> output: <!doctype html> <html> <body> <h1 newattr="newvalue">title</h1> <div class="bar"><b><u>baz</u></b></div> <p>untouched</p> </body> </html>

    Read the article

  • How to use a LinkButton inside gridview to delete selected username in the code-behind file?

    - by jenifer
    I have a "UserDetail" table in my "JobPost.mdf". I have a "Gridview1" showing the column from "UserDetail" table,which has a primary key "UserName". This "UserName" is originally saved using Membership class function. Now I add a "Delete" linkbutton to the GridView1. This "Delete" is not autogenerate button,I dragged inside the column itemtemplate from ToolBox. The GridView1's columns now become "Delete_LinkButton"+"UserName"(within the UserDetail table)+"City"(within the UserDetail table)+"IsAdmin"(within the UserDetail table) What I need is that by clicking this "delete_linkButton",it will ONLY delete the entire User Entity on the same row (link by the corresponding "UserName") from the "UserDetail" table,as well as delete all information from the AspNetDB.mdf (User,Membership,UserInRole,etc). I would like to fireup a user confirm,but not mandatory. At least I am trying to make it functional in the correct way. for example: Command UserName City IsAdmin delete ken Los Angles TRUE delete jim Toronto FALSE When I click "delete" on the first row, I need all the record about "ken" inside the "UserDetail" table to be removed. Meanwhile, all the record about "ken" in the AspNetDB.mdf will be gone, including UserinRole table. I am new to asp.net, so I don't know how to pass the commandargument of the "Delete_LinkButton" to the code-behind file LinkButton1_Click(object sender, EventArgs e), because I need one extra parameter "UserName". My partial code is listed below: <asp:TemplateField> <ItemTemplate> <asp:LinkButton ID="Delete_LinkButton" runat="server" onclick="LinkButton1_Click1" CommandArgument='<%# Eval("UserName","{0}") %>'>LinkButton</asp:LinkButton> </ItemTemplate> </asp:TemplateField> protected void Delete_LinkButton_Click(object sender, EventArgs e) { ((LinkButton) GridView1.FindControl("Delete_LinkButton")).Attributes.Add("onclick", "'return confirm('Are you sure you want to delete {0} '" + UserName); Membership.DeleteUser(UserName); JobPostDataContext db = new JobPostDataContext(); var query = from u in db.UserDetails where u.UserName == UserName select u; for (var Item in query) { db.UserDetails.DeleteOnSubmit(Item); } db.SubmitChanges(); } Please do help! Thanks in advance.

    Read the article
