Search Results

Search found 6988 results on 280 pages for 'if else statement'.

  • Avoiding Mixup of Language Details

    - by DarenW
    Today someone asked me what was wrong with their source code. It was obvious. "Use double equals in place of that single equal in that if statement. Um, I think..." As I remember, some languages actually use a single equals sign for comparison. Since I sometimes forget or mix up the syntax details among the several languages I use, I stepped over to my laptop to try a quickie experiment... It costs a bit of time and breaks the flow to try "quick" experiments (though maybe the practice is good for memory). What tips do you have for keeping the syntax (and other) details of multiple languages straight in your mind? (And nowadays, this applies just as well to the many wiki-like markups!)

    Read the article

  • C# ProgressBar design pattern

    - by MadSeb
    Hi, I'm working on a database upgrader application. The upgrader updates the schema of a database (adds new columns, renames columns, and adds new tables and views to an existing database by executing SQL statements). When a user wants to upgrade from version 1.0 to 2.0, "Upgrader" objects are taken from an object factory and the "Execute" method of each "Upgrader" is called. The GUI of the application has a progress bar, and each time an upgrader object executes a SQL statement the progress bar gets incremented.

        while (!version.Equal(CurrentVersion))
        {
            IUpgrader myUpgrader = UpgraderFactory.GetUpgrader(version);
            myUpgrader.Execute(UpgradedFile, progressbar);
            version.Increment();
        }

    My question is very simple: how should the upgrader object communicate with the progress bar? In the code above, the upgrader object is given direct access to the progress bar, but I'm wondering if a better way of doing this or a better design pattern exists. Regards, Seb

    Read the article

  • Prevent SQL injection from form-generated SQL.

    - by Markos Fragkakis
    Hi all, I have a search table where the user will be able to filter results with a filter of the type:

        Field [Name],         Value [John],  Remove Rule
        Field [Surname],      Value [Blake], Remove Rule
        Field [Has Children], Value [Yes],   Remove Rule
        Add Rule

    So the user will be able to set an arbitrary set of filters, which essentially results in a completely dynamic WHERE clause. In the future I will also have to implement more complicated logical expressions, like WHERE (name=John OR name=Nick) AND (surname=Blake OR surname=Bourne). Of the 10 fields the user may or may not filter by, I don't know how many and which filters the user will set. So I cannot use a prepared statement (which assumes that at least we know the fields in the WHERE clause); prepared statements are unfortunately out of the question, and I have to do it with plain old generated SQL. What measures can I take to protect the application from SQL injection (regex-wise or any other way)?
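
    For reference, one common approach is to generate only the structure of the WHERE clause from a fixed whitelist of column names while still binding every user-supplied value as a parameter, so no user input is ever concatenated into the SQL text. A minimal sketch, assuming a hypothetical people table and the three filters above (placeholders shown as ?):

        SELECT *
        FROM people
        WHERE name = ?
          AND surname = ?
          AND has_children = ?

    Only the choice of columns (taken from the whitelist) changes between generated queries; the values for the ? placeholders are always passed through the driver's bind mechanism.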

    Read the article

  • Drop a DB2 view if it exists...

    - by grenade
    Why doesn't this work in IBM Data Studio (Eclipse)?

        IF EXISTS (SELECT 1 FROM SYSIBM.SYSVIEWS
                   WHERE NAME = 'MYVIEW' AND CREATOR = 'MYSCHEMA') THEN
            DROP VIEW MYSCHEMA.MYVIEW;
        END IF;

    I have a feeling it has to do with statement terminators (;) but I can't find a syntax that works. Another similar question at http://stackoverflow.com/questions/355687/how-to-check-a-procedure-view-table-exists-or-not-before-dropping-it-in-db2-9-1 suggests that they had to create a proc, but this isn't a solution for us.
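
    For what it's worth, a sketch of one way this is often handled, assuming DB2 9.7 or later (which supports anonymous compound statements) and assuming the statement terminator is switched to @ so the semicolons inside the block aren't read as statement ends; the DROP is issued through EXECUTE IMMEDIATE:

        --#SET TERMINATOR @
        BEGIN
            IF EXISTS (SELECT 1 FROM SYSIBM.SYSVIEWS
                       WHERE NAME = 'MYVIEW' AND CREATOR = 'MYSCHEMA') THEN
                EXECUTE IMMEDIATE 'DROP VIEW MYSCHEMA.MYVIEW';
            END IF;
        END@

    The terminator directive shown is the command-line style; in Data Studio the terminator can typically be changed in the SQL editor or run settings instead.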

    Read the article

  • Client-side Replication for SQL Server?

    - by Mighty Z
    I'd like to have some degree of fault tolerance / redundancy with my SQL Server Express database. I know that if I upgrade to a pricier version of SQL Server, I can get "Replication" built in. But I'm wondering if anyone has experience in managing replication on the client side. As in, from my application:

        - Every time I need to create, update or delete records from the database, issue the statement to all n servers directly from the client side.
        - Every time I need to read, I can do so from one representative server (other schemes seem possible here, too).

    It seems like this logic could potentially be added directly to my Linq-To-SQL Data Context. Any thoughts?

    Read the article

  • Insert rows into MySQL table while changing one value

    - by Jonathan
    Hey all- I have a MySQL table that defines values for a specific customer type:

        | CustomerType | Key Field 1 | Key Field 2 | Value |

    Each customer type has 10 values associated with it (based on the other two key fields). I am creating a new customer type (TypeB) that is exactly the same as another customer type (TypeA). I want to insert "TypeB" as the CustomerType but just copy the values from TypeA's rows for the other three fields. Is there an SQL insert statement to make this happen? Something like:

        insert into customers(customer_type, key1, key2, value)
        values "TypeB"
        union
        select key1, key2, value from customers where customer_type = "TypeA"

    Thanks- Jonathan
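
    For reference, a minimal sketch of the usual INSERT ... SELECT form for this kind of copy, assuming the table and column names from the question's own example (customers, customer_type, key1, key2, value); the new type is supplied as a literal in the select list rather than through a VALUES clause:

        INSERT INTO customers (customer_type, key1, key2, value)
        SELECT 'TypeB', key1, key2, value
        FROM customers
        WHERE customer_type = 'TypeA';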

    Read the article

  • Why aren't unsigned OpenMP index variables allowed?

    - by Moe
    I have a loop in my C++/OpenMP code that looks like this:

        #pragma omp parallel for
        for(unsigned int i=0; i<count; i++)
        {
            // do stuff
        }

    When I compile it (with Visual Studio 2005) I get the following error:

        error C3016: 'i' : index variable in OpenMP 'for' statement must have signed integral type

    I understand that the error occurs because i is unsigned instead of signed, and changing i to be signed removes this error. What I want to know is why this is an error. Why aren't unsigned index variables allowed? Looking at the MSDN page for this error gives me no clues.

    Read the article

  • Assert parameters in a table-valued UDF

    - by Clay Lenhart
    Is there a way to create "asserts" on the parameters of a table-valued UDF? I'd like to use a table-valued UDF for performance reasons; however, I know that certain parameter combinations (like start and end dates that are more than a month apart) will cause performance issues on the server for all users. End users query the database via Excel using UDFs. UDFs (and table-valued UDFs in particular) are useful when the data is too large for Excel. Users write simple SQL queries that categorize the data into groups to reduce the number of rows. For example, the user may be interested in weekly aggregates rather than hourly ones, so they write a GROUP BY SELECT statement to reduce the number of rows by a factor of 24x7=168. I know I can write RAISERROR statements in multistatement UDFs, but table-valued UDFs are integrated into the query optimizer, so these queries are more efficient with table-valued UDFs. So, can I define assertions on the parameters passed to a table-valued UDF?

    Read the article

  • Save entity with entity framework

    - by Michel
    Hi, I'm saving entities/records with the EF, but I'm curious if there is another way of doing it. I receive a class from an MVC controller method, so basically I have all the info: the class's properties, including the primary key. Without EF I would do a SQL update (update table set a=b, c=d where id = 5), but with EF I got no further than this:

        1. Get an object with ID of 5.
        2. Update the (existing) object with the new object.
        3. SubmitChanges.

    What bothers me is that I have to get the object from the database first, when I already have all the info to do an update statement. Is there another way of doing this?

    Read the article

  • Declaring more than one SPIM array causes a syntax error

    - by Zack
    Below is the beginning of a chunk of SPIM code:

        .data
        a: .space 20
        b: .space 20
        .text
        set_all:
            sw $ra,0($sp)
            li $t0,0
            li $t1,10
            ............

    Unfortunately, the second array I declare ('b') causes the SPIM interpreter to spit out:

        spim: (parser) syntax error on line 3 of file spim.out
        b: .space 20
           ^

    Similar code works when I only have one array -- it seems to be the second that screws it up. I've prodded at it but can't figure out what it is about that statement that makes it break. Any thoughts? Thanks for any insight.

    Read the article

  • Why is there a limit of max 20 parameters to a Clojure function?

    - by GuyC
    Hi, there seems to be a limit to the number of parameters a Clojure function can take. When defining a function with more than 20 parameters I receive the following: Obviously this can be avoided, but I was hitting this limit porting the execution model of an existing DSL to Clojure, and I have constructs in my DSL like the following, which by macro expansion can be mapped to functions quite easily except for this limit:

        (defAlias nn1 ((element ?e1) (element ?e2)) number
          "@doc features of the elements are calculated for entry into the first neural network, the result is the score computed by the latter"
          (nn1-recall (nn1-feature00 ?e1 ?e2) (nn1-feature01 ?e1 ?e2) ... (nn1-feature89 ?e1 ?e2)))

    which is a DSL statement to call a neural network with 90 input nodes. I can work around it of course, but I was wondering where the limit comes from. Thanks.

    Read the article

  • Is Stopwatch really broken?

    - by Jakub Šturc
    At the MSDN page for the Stopwatch class I discovered a link to an interesting article which makes the following statement about Stopwatch:

        However there are some serious issues:
        - This can be unreliable on a PC with multiple processors. Due to a bug in the BIOS, Start() and Stop() must be executed on the same processor to get a correct result.
        - This is unreliable on processors that do not have a constant clock speed (most processors can reduce the clock speed to conserve energy). This is explained in detail here.

    I am a little confused. I've seen tons of examples of using Stopwatch and nobody mentions these drawbacks. How serious is this? Should I avoid using Stopwatch?

    Read the article

  • Using CHECK constraint in MySQL for controlling string length not working

    - by ptrn
    Dear stackoverflow, I've stumbled on a problem! I've set up my first CHECK constraint in MySQL, but unfortunately it isn't working: when inserting a row that should fail the check, the row is inserted anyway. The structure:

        CREATE TABLE user (
            id INT UNSIGNED NOT NULL AUTO_INCREMENT,
            uname VARCHAR(10) NOT NULL,
            fname VARCHAR(50) NOT NULL,
            lname VARCHAR(50) NOT NULL,
            mail VARCHAR(50) NOT NULL,
            PRIMARY KEY (id),
            CHECK (LENGTH(fname) > 3)
        );

    The insert statement:

        INSERT INTO user VALUES (null, 'user', 'Fname', 'Lname', '[email protected]');

    I'm pretty sure I'm missing something basic here.
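
    For context, older MySQL versions (before 8.0.16) parse CHECK constraints but silently ignore them, so the constraint above is never enforced. A minimal sketch of the usual trigger workaround, assuming MySQL 5.5 or later for SIGNAL (the trigger name is arbitrary):

        DELIMITER $$
        CREATE TRIGGER user_fname_check BEFORE INSERT ON user
        FOR EACH ROW
        BEGIN
            -- reject rows that would have violated the intended CHECK
            IF LENGTH(NEW.fname) <= 3 THEN
                SIGNAL SQLSTATE '45000'
                    SET MESSAGE_TEXT = 'fname must be longer than 3 characters';
            END IF;
        END$$
        DELIMITER ;

    A matching BEFORE UPDATE trigger is usually added as well so the rule also holds for updates.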

    Read the article

  • Pulling data from a text file to generate a report

    - by Edmond
    I have a program in Access, using VBA. I need to come up with an If statement to pull data from a text file. The data is a list of procedures and prices, and I have to pull the prices from the text file to show in a report how much each procedure costs. The table in the text file is set up like this:

        ID | PID                  | M1  | M2  | M3  | Total
        1  | 11120390 (procedure) |     |     |     |
        2  | 180 (price)          | 360 | 180 | 540 | 1080 (total price)
        3  |                      |   2 |   1 |   3 | 6 (units sold)
        4  |                      |     |     |     |
        5  | 200 (price)          | 200 | 600 | 800 | 1600 (total price)
        6  |                      |   1 |   3 |   4 | 8 (units sold)
        7  | 11120390 (procedure) |     |     |     |

    I need to pull the procedure number and the price of each procedure from the text file.

    Read the article

  • python duration of a file object in an argument list

    - by msw
    In the pickle module documentation there is a snippet of example code: reader = pickle.load(open('save.p', 'rb')) which upon first read looked like it would allocate a system file descriptor, read its contents and then "leak" the open descriptor for there isn't any handle accessible to call close() upon. This got me wondering if there was any hidden magic that takes care of this case. Diving into the source, I found in Modules/_fileio.c that file descriptors are closed by the fileio_dealloc() destructor which led to the real question. What is the duration of the file object returned by the example code above? After that statement executes does the object indeed become unreferenced and therefore will the fd be subject to a real close(2) call at some future garbage collection sweep? If so, is the example line good practice, or should one not count on the fd being released thus risking kernel per-process descriptor table exhaustion?

    Read the article

  • Microsoft Sync Framework - Local DB and Remote DB have to have the same schema?

    - by Josh
    When using MSF, is it implied in the technology that the sync tables are supposed to be 1-1? The reason I'm wondering is that if I'm synching from a SQL2005 database to a SQLCE, I might want the CE one to be a little more flattened out so I can get data out with a simpler SELECT statement (as CE does not support sprocs). For example, I might have a tblCustomer, tblOrder, and tblCustomerOrder in the central database, but in the local databases one table with all the data might be preferred. Of course I'd still want the updates to reflect back and forth between the two databases. Does MSF make this possible, or does the local DB have to have the same tables as the central?

    Read the article

  • What's your idea about these two ways to create a folder in Oracle?

    - by rima
    Following up on my last question about how to create a folder, I found some code that somebody wrote before me (sorry, because of limitations I can't paste the code here). It tries to create a bat file using Oracle's TEXT_IO (outfile, file_type), writing statements like these:

        body_of_file = 'Net use x: \\address'
        body_of_file += 'md' || filename
        body_of_file += 'start ' || file name

    It then opens the bat file, writes into it, and calls it with HOST, like:

        Host('cmd /c \\address\.x.bat host_folder' || sysdate);

    But the same thing could be done easily and directly by calling HOST. I also don't know why this code only works in Oracle 6i; we use both Oracle 6i and 10g. Would you please help me: 1) why does this code not work in 10g? 2) which way is better: creating a batch file that creates the folder, or using HOST to run each command? (In my opinion both are the same; what about you?)

    Read the article

  • XPath select an attribute based on value

    - by Apeksha
    Using VB.Net, I have an XmlNode object, xNode. I need to select an attribute of this node if it has a particular value. e.g. xNode.SelectSingleNode(".[@attr1='1']") I would expect this statement to return the attribute "attr1", only if it has a value of "1". However, I get an error - Expression must evaluate to a node-set. When I tried this - xNode.SelectSingleNode("@attr1[@attr1='1']") It always returns Nothing, even if the attribute has a value of 1. I have tried a lot of different things, but no luck yet. Please help. Thanks.

    Read the article

  • FMDB transaction

    - by user142764
    Hi! I use FMDB to wrap SQLite in my app. I haven't found any docs about the use of the methods begin, beginUpdates, commit, finalize, etc. I face some problems in my apps which I think are caused by the way I use transactions. Here is what I tried:

        [FMDB beginUpdates]
        - My insert statement -
        [FMDB commit]
        [FMDB finalize]

    It crashes with this log:

        Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[FMDatabase<0xd705a0 finalize]: called when collecting not enabled'

    Could you please give me an example of how you are using transactions, or point me to a doc? Thanks in advance, Vincent.

    Read the article

  • Hot deploy no longer working on JBoss

    - by Bernhard V
    Hi! I've got a pretty annoying problem with my JBoss AS 4.2.3 GA. Until recently everything was running fine, but now the hot deploy feature is no longer working. And -- as always -- I don't know what I did to cause this behaviour. My projects are built with Maven. I've cleaned every target directory, installed the projects and then deployed them to the server, so the sources in Eclipse and the deployed projects on the server should be identical. Inside a method I've added a simple System.out.println("test"); statement and -- BANG! -- I get the following error: Do you know a way out of my trouble?

    Read the article

  • Difference between Revert and Update in Mercurial

    - by Edan Maor
    I'm just getting started with Mercurial, and I've come across something which I don't understand. I made changes to several files, and now I want to undo all the changes I made to one of them (i.e. go back to my last commit for one specific file). As far as I can see, the command I want is revert. In the page I linked to, there is the following statement: This operation however does not change the parent revision of the working directory (or revisions in case of an uncommitted merge). To undo an uncomitted merge, you can use "hg update -C -r." which will reset the parents to the first parent. I don't understand the difference between the two (hg revert vs. hg update -C -r). Can anyone enlighten me as to the difference? And in my case, do I really want revert or update to get rid of the changes I made to the file? Thanks

    Read the article

  • Sqlite: CURRENT_TIMESTAMP is in GMT, not the timezone of the machine

    - by BrianH
    I have a sqlite (v3) table with this column definition: "timestamp" DATETIME DEFAULT CURRENT_TIMESTAMP The server that this database lives on is in the CST time zone. When I insert into my table without including the timestamp column, sqlite automatically populates that field with the current timestamp in GMT, not CST. Is there a way to modify my insert statement to force the stored timestamp to be in CST? On the other hand, it is probably better to store it in GMT (in case the database gets moved to a different timezone, for example), so is there a way I can modify my select SQL to convert the stored timestamp to CST when I extract it from the table? Thanks in advance!
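
    For reference, a minimal sketch of both options in SQLite, assuming a hypothetical events table with the same timestamp column; the stored value can stay in UTC (the default) and be converted on the way out with the 'localtime' modifier, or local time can be written explicitly at insert time:

        -- keep storing UTC and convert when selecting:
        SELECT datetime(timestamp, 'localtime') AS local_ts FROM events;

        -- or store local time at insert instead of relying on the default:
        INSERT INTO events (timestamp) VALUES (datetime('now', 'localtime'));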

    Read the article

  • How do I receive a byte array sent from a server and read 4 bytes at a time?

    - by k80sg
    I need to receive 320 bytes of data from a server, consisting of 80 4-byte int fields. How do I receive them 4 bytes at a time and display their respective int values? Thanks. Not sure if this is right for the receiving part:

        // for reading the data from the socket
        BufferedInputStream bufferinput = new BufferedInputStream(NewSocket.getInputStream());
        DataInputStream datainput = new DataInputStream(bufferinput);
        byte[] handsize = new byte[32];
        // the call below blocks until all 32 bytes have been read from the socket
        datainput.readFully(handsize);

    Read the article

  • Change Large Number of Record Keys using Map Table

    - by Coyote
    I have a set of records indexed by id numbers, and I need to convert these records' ids to new id numbers. I have a two-column table mapping the old numbers to the new numbers. For example, given these two tables, what would the update statement look like?

    Given:

        OLD_TO_NEW
        oldid | newid
        -----------------
        1234    0987
        7698    5645
        ...     ...

    and

        id   | data
        ----------------
        1234   'yo'
        7698   'hey'
        ...    ...

    Need:

        id   | data
        ----------------
        0987   'yo'
        5645   'hey'
        ...    ...

    This is Oracle, so I have access to PL/SQL; I'm just trying to avoid it.
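
    For reference, a minimal sketch of a single correlated UPDATE that avoids PL/SQL, assuming the data table is called records_table (the question doesn't name it) and that ids without a mapping should be left alone:

        UPDATE records_table r
        SET r.id = (SELECT m.newid
                    FROM old_to_new m
                    WHERE m.oldid = r.id)
        WHERE EXISTS (SELECT 1
                      FROM old_to_new m
                      WHERE m.oldid = r.id);

    The WHERE EXISTS clause keeps rows without a matching OLD_TO_NEW entry from being set to NULL.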

    Read the article

  • DBLinq not generating where clause

    - by sipwiz
    I'm testing out DBLinq-0.18 and DBLinq from SVN trunk with MySQL and Postgresql. I'm only using a very simple query, but on both databases DBLinq is not generating a Where clause. I have confirmed this by turning on statement logging in Postgresql to check exactly what request DBLinq is sending. My Linq query is:

        MyDB db = new MyDB(new NpgsqlConnection("Database=database;Host=localhost;User Id=postgres;Password=password"));
        var customers = from customer in db.Customers
                        where customer.CustomerUserName == "test"
                        select customer;

    The query works ok, but the SQL generated by DBLinq is of the form:

        select customerusername, customerpassword .... from public.customers

    There is no Where clause, which means DBLinq must be pulling the whole table down before running the Linq query. Has anyone had any experience with DBLinq and know what I could be doing wrong?

    Read the article
