Search Results

Search found 5521 results on 221 pages for 'deeper understanding'.


  • RS-232 communication, general timing question

    - by Sunny Dee
    Hi, I have a piece of hardware which sends out a byte of data representing a voltage signal at a frequency of 100Hz over the serial port. I want to write a program that will read in the data so I can plot it. I know I need to open the serial port and open an InputStream.

    But this next part is confusing me, and I'm having trouble understanding the process conceptually: I create a while loop that reads in the data from the InputStream one byte at a time. How do I get the while loop timing so that there is always a byte available to be read whenever it reaches the read-byte line? I'm guessing that I can't just put a sleep function inside the while loop to try and match it to the hardware sample rate.

    Is it just a matter of continually reading the InputStream in the while loop? If the loop is too fast, then it won't do anything (since there's no new data), and if it's too slow, then the data will accumulate in the InputStream's buffer?

    Like I said, I'm only trying to understand this conceptually, so any guidance would be much appreciated! I'm guessing the idea is independent of which programming language I'm using, but if not, assume it is for use in Java. Thanks!
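    The conceptual point the asker is circling is that a serial read normally blocks: the OS driver buffers incoming bytes, and the read call simply waits until one is available, so the device's 100Hz rate paces the loop and no sleep is needed. A minimal sketch of that shape (shown in C# with System.IO.Ports for consistency with the other examples on this page; the port name and line settings are assumptions, and a Java InputStream.read() behaves the same way):

        using System;
        using System.IO.Ports;

        class VoltageReader
        {
            static void Main()
            {
                // Port name and line settings are illustrative assumptions.
                using (var port = new SerialPort("COM3", 9600, Parity.None, 8, StopBits.One))
                {
                    port.Open();
                    while (true)
                    {
                        // ReadByte blocks until the next byte arrives, so the
                        // hardware's 100Hz sample rate drives the loop's timing.
                        int sample = port.ReadByte();
                        Console.WriteLine("voltage byte: {0}", sample);
                    }
                }
            }
        }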


  • Practical Scheme Programming

    - by Ixmatus
    It's been a few months since I've touched Scheme, and I decided to implement a command-line income partitioner in Scheme. My initial implementation used plain recursion, but I figured a continuation would be more appropriate for this type of program. I would appreciate it if anyone (more skilled with Scheme than I) could take a look at this and suggest improvements. I'm thinking that the multiple (display ...) lines are an ideal opportunity to use a macro as well (I just haven't gotten to macros yet).

        (define (ab-income)
          (call/cc
           (lambda (cc)
             (let ((out (display "Income: "))
                   (income (string->number (read-line))))
               (cond ((<= income 600)
                      (display (format "Please enter an amount greater than $600.00~n~n"))
                      (cc (ab-income)))
                     (else
                      (let ((bills (* (/ 30 100) income))
                            (taxes (* (/ 20 100) income))
                            (savings (* (/ 10 100) income))
                            (checking (* (/ 40 100) income)))
                        (display (format "~nDeduct for bills:---------------------- $~a~n" (real->decimal-string bills 2)))
                        (display (format "Deduct for taxes:---------------------- $~a~n" (real->decimal-string taxes 2)))
                        (display (format "Deduct for savings:-------------------- $~a~n" (real->decimal-string savings 2)))
                        (display (format "Remainder for checking:---------------- $~a~n" (real->decimal-string checking 2))))))))))

    Invoking (ab-income) asks for input, and if anything below 600 is provided it (from my understanding) returns (ab-income) at the current continuation. My first implementation (as I said earlier) used plain-jane recursion. It wasn't bad at all either, but I figured every return call to (ab-income) when the value was below 600 kept expanding the function. (Please correct me if that apprehension is incorrect!)


  • Which LINQ expression is faster

    - by Vlad Bezden
    Hi all, in the following code:

        public class Person
        {
            public string Name { get; set; }
            public uint Age { get; set; }
            public Person(string name, uint age)
            {
                Name = name;
                Age = age;
            }
        }

        void Main()
        {
            var data = new List<Person> {
                new Person("Bill Gates", 55),
                new Person("Steve Ballmer", 54),
                new Person("Steve Jobs", 55),
                new Person("Scott Gu", 35) };

            // 1st approach
            data.Where(x => x.Age > 40).ToList().ForEach(x => x.Age++);

            // 2nd approach
            data.ForEach(x => { if (x.Age > 40) x.Age++; });

            data.ForEach(x => Console.WriteLine(x));
        }

    In my understanding the 2nd approach should be faster, since it iterates through each item once, while the first approach runs two passes: the Where clause, then ForEach on the subset of items from the Where clause. However, internally it might be that the compiler translates the 1st approach into the 2nd approach anyway, and they will have the same performance. Any suggestions or ideas? I could do profiling as suggested, but I want to understand what is going on at the compiler level - whether those two lines of code are the same to the compiler, or the compiler will treat them literally. Thanks in advance for your help.
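    One way to settle the hypothesis is to time both forms on a large list: the compiler does not fuse the chain, so Where produces a lazy sequence, ToList copies the matches into a new List, and ForEach then walks that copy. A minimal Stopwatch sketch (the list size and data are arbitrary choices for illustration, not from the original post):

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;
        using System.Linq;

        class Person
        {
            public string Name;
            public uint Age;
            public Person(string name, uint age) { Name = name; Age = age; }
        }

        class WhereVersusForEach
        {
            static void Main()
            {
                var data = Enumerable.Range(0, 1000000)
                                     .Select(i => new Person("p" + i, (uint)(i % 80)))
                                     .ToList();

                var sw = Stopwatch.StartNew();
                // Two passes plus an intermediate list allocation.
                data.Where(x => x.Age > 40).ToList().ForEach(x => x.Age++);
                sw.Stop();
                Console.WriteLine("Where + ToList + ForEach: {0} ms", sw.ElapsedMilliseconds);

                sw.Restart();
                // One pass, no intermediate allocation.
                data.ForEach(x => { if (x.Age > 40) x.Age++; });
                sw.Stop();
                Console.WriteLine("Single ForEach:           {0} ms", sw.ElapsedMilliseconds);
            }
        }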


  • What is the relationship between a Turing machine and a modern computer?

    - by smwikipedia
    I have heard a lot that modern computers are based on the Turing machine, but I just cannot build a bridge from a conceptual Turing machine to a real modern computer. Could someone help me build this bridge? Below is my current understanding.

    I think the computer is a big general-purpose Turing machine, and each program we write is a small specific-purpose Turing machine. The classical Turing machine does its job based on its input and its current internal state, and so do our programs. Let's take a running program (a process) as an example. We know that in the process's address space there are areas for the stack, heap, and code. A classical Turing machine doesn't have the ability to remember many things, so we borrow the concept of a stack from the push-down automaton. The heap and stack areas contain the state of our specific-purpose Turing machine (our program). The code area represents the logic of this small Turing machine. And various I/O devices supply input to this Turing machine.


  • trying to draw scaled UIImage in custom view, but nothing's rendering

    - by Ben Collins
    I've created a custom view class and right now just want to draw an image scaled to fit the view, given a UIImage. I tried just drawing the UIImage.CGImage, but as others have attested to on this site (and in the docs), that renders the image upside down. So, at the suggestion of an answer I found to another question, I'm trying to draw it directly, but nothing is rendering in the view and I'm not sure why. Here's my drawing code:

        - (void)drawRect:(CGRect)rect {
            // Drawing code
            [super drawRect:rect];
            if (self.originalImage) {
                [self drawImage];
            }
        }

        - (void)drawImage {
            if (CGSizeEqualToSize(originalImage.size, self.frame.size) == NO) {
                CGFloat scaleFactor = 1.0;
                CGFloat scaledWidth = 0.0;
                CGFloat scaledHeight = 0.0;
                CGPoint thumbPoint = CGPointMake(0.0, 0.0);

                CGFloat widthFactor = self.frame.size.width / originalImage.size.width;
                CGFloat heightFactor = self.frame.size.height / originalImage.size.height;
                if (widthFactor < heightFactor) {
                    scaleFactor = widthFactor;
                } else {
                    scaleFactor = heightFactor;
                }
                scaledWidth = originalImage.size.width * scaleFactor;
                scaledHeight = originalImage.size.height * scaleFactor;
                if (widthFactor < heightFactor) {
                    thumbPoint.y = (self.frame.size.height - scaledHeight) * 0.5;
                } else if (widthFactor > heightFactor) {
                    thumbPoint.x = (self.frame.size.width - scaledWidth) * 0.5;
                }

                UIGraphicsBeginImageContext(self.frame.size);
                CGRect thumbRect = CGRectZero;
                thumbRect.origin = thumbPoint;
                thumbRect.size.width = scaledWidth;
                thumbRect.size.height = scaledHeight;
                [originalImage drawInRect:thumbRect];
                self.scaledImage = UIGraphicsGetImageFromCurrentImageContext();
                UIGraphicsEndImageContext();
            } else {
                self.scaledImage = originalImage;
            }
        }

    My understanding (after studying this a bit) is that the UIGraphicsBeginImageContext function creates an offscreen context for me to draw into, so now how do I render that context on top of the original one?


  • Google's OAuth for Installed Apps vs. OAuth for Web Apps

    - by burgerguy
    So I'm having trouble understanding something... If you do OAuth for Web Apps, you register your site with a callback URL and get a unique consumer secret key. But once you've obtained an OAuth for Web Apps token, you don't have to generate OAuth calls to the Google server from your registered domain. I regularly use my key and token from scripts running via an Apache server at localhost on my laptop, and Google never says "you're not sending this request from the registered domain." It just sends me the data.

    Now, as I understand it, if you do OAuth for Installed Apps, you use "anonymous" instead of a secret key you got from Google. I've been thinking of just using the OAuth for Web Apps auth method, then passing that token to an installed app that has my secret code embedded in its innards. The worry is that the code could be discovered by bad people. But which is more secure: making them work for the secret code, or letting them default to anonymous? What really goes bad if the "secret" is discovered, when the alternative is using "anonymous" as the secret?


  • SQL Server Clustered Index: (Physical) Data Page Order

    - by scherand
    I am struggling to understand what a clustered index in SQL Server 2005 is. I read the MSDN article Clustered Index Structures (among other things), but I am still unsure if I understand it correctly.

    The (main) question is: what happens if I insert a row (with a "low" key) into a table with a clustered index? The above-mentioned MSDN article states: "The pages in the data chain and the rows in them are ordered on the value of the clustered index key." And Using Clustered Indexes, for example, states: "For example, if a record is added to the table that is close to the beginning of the sequentially ordered list, any records in the table after that record will need to shift to allow the record to be inserted."

    Does this mean that if I insert a row with a very "low" key into a table that already contains a gazillion rows, literally all rows are physically shifted on disk? I cannot believe that. This would take ages, no? Or is it rather (as I suspect) that there are two scenarios, depending on how "full" the first data page is:

    A) If the page has enough free space to accommodate the record, it is placed into the existing data page, and data might be (physically) reordered within that page.

    B) If the page does not have enough free space for the record, a new data page is created (anywhere on the disk!) and "linked" to the front of the leaf level of the B-tree.

    This would then mean the "physical order" of the data is restricted to the page level (i.e. within a data page), but not to the pages residing on consecutive blocks on the physical hard drive. The data pages are then just linked together in the correct order. Or, formulated in an alternative way: if SQL Server needs to read the first N rows of a table that has a clustered index, it can read the data pages sequentially (following the links), but these pages are not (necessarily) in block-wise sequence on disk (so the disk head has to move "randomly"). How close am I? :)


  • What's the best practice to set up testing for ASP.NET MVC? What to use/process/etc?

    - by melaos
    Hi there, I'm trying to learn how to properly set up testing for an ASP.NET MVC project. From what I've been reading here and there, the definition of legacy code kind of piques my interest: it's mentioned that legacy code is any code without unit tests. I did my project in a hurry, not having the time to properly set up unit tests for the app, and I'm still learning how to properly do TDD and unit testing at the same time. Then I came upon Selenium IDE/RC and was using it to test on the browser end. It was during that time, too, that I came upon the concept of integration testing. From my understanding, it seems that unit testing should be done to define the test and the basic assumptions of each function, and if the function is dependent on something else, that something else needs to be mocked, so that the tests are always singular and can run fast.

    Questions: Am I right to say that the project should have started with unit tests and proper mocks, using something like Rhino Mocks, and that anything else which requires a third-party DLL, database data access, etc. is then done via integration testing using Selenium? I have a function which calls a third-party DLL, and I'm not sure whether to write a unit test in NUnit that just instantiates the object and passes it some dummy data (which breaks the mocking part) to test it, or to just cover that part in my Selenium integration testing when I submit my forms and call the DLL. And for user acceptance tests, is it safe to say we can just use Selenium again? Am I missing something, or is there a better way/framework?

    I'm trying to put in more tests for regression testing, and to ensure that nothing breaks when we put in new features. I also like the idea of TDD because it helps to better define each function - sort of like meta-documentation. Thanks!! I hope this question isn't too subjective, because I need it for my case.
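    On the mocking part of the question, the usual move is to wrap the third-party DLL behind an interface, so the unit test exercises your logic against a fake and only the integration tests (Selenium or otherwise) touch the real DLL. A minimal NUnit sketch - the interface, service, and behavior here are hypothetical, invented purely to show the seam:

        using NUnit.Framework;

        // Hypothetical seam around the third-party DLL.
        public interface IPdfGenerator
        {
            byte[] Render(string html);
        }

        // The code under test depends on the interface, not the DLL.
        public class ReportService
        {
            private readonly IPdfGenerator _generator;
            public ReportService(IPdfGenerator generator) { _generator = generator; }

            public int PublishReport(string html)
            {
                byte[] pdf = _generator.Render(html);
                return pdf.Length;   // stand-in for the real logic
            }
        }

        // Hand-rolled fake; Rhino Mocks could generate this instead.
        class FakePdfGenerator : IPdfGenerator
        {
            public byte[] Render(string html) { return new byte[] { 1, 2, 3 }; }
        }

        [TestFixture]
        public class ReportServiceTests
        {
            [Test]
            public void PublishReport_ReturnsSizeOfRenderedPdf()
            {
                var service = new ReportService(new FakePdfGenerator());
                Assert.AreEqual(3, service.PublishReport("<p>hi</p>"));
            }
        }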


  • What is happening in this T-SQL code?

    - by Ben McCormack
    I'm just starting to learn T-SQL and could use some help in understanding what's going on in a particular block of code. I modified some code in an answer I received in a previous question, and here is the code in question:

        DECLARE @column_list AS varchar(max)

        SELECT @column_list = COALESCE(@column_list, ',')
            + 'SUM(Case When Sku2=' + CONVERT(varchar, Sku2)
            + ' Then Quantity Else 0 End) As ['
            + CONVERT(varchar, Sku2) + ' - ' + Convert(varchar, Description) + '],'
        FROM OrderDetailDeliveryReview
        INNER JOIN InvMast ON SKU2 = SKU AND LocationTypeID = 4
        GROUP BY Sku2, Description
        ORDER BY Sku2

        SET @column_list = Left(@column_list, Len(@column_list) - 1)

        SELECT @column_list

        ----------------------------------------
        1 row is returned:
        ,SUM(Case When Sku2=157 Then Quantity Else 0 End) As [157 -..., SUM(Case ...

    The T-SQL code does exactly what I want, which is to build a single result from the rows of a query, which will then be used in another query. However, I can't figure out how the SELECT @column_list = ... statement is putting multiple values into a single string of characters by being inside a SELECT statement. Without the assignment to @column_list, the SELECT statement would simply return multiple rows. How is it that by having the variable within the SELECT statement the results get "flattened" down into one value? How should I read this T-SQL to properly understand what's going on?
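    The mechanism here is that a SELECT that assigns to a variable evaluates the assignment once per row of the result set, with the variable carrying its value from row to row, so the string accumulates. Conceptually it behaves like this C# loop (purely illustrative - the SKU values are made up stand-ins for the GROUP BY rows):

        using System;

        class QuirkyAggregation
        {
            static void Main()
            {
                string columnList = null; // DECLARE @column_list starts as NULL
                foreach (var sku in new[] { 157, 158, 159 }) // stand-ins for the GROUP BY rows
                {
                    // COALESCE(@column_list, ',') + 'SUM(...) As [...],' per row
                    columnList = (columnList ?? ",")
                        + "SUM(Case When Sku2=" + sku + " Then Quantity Else 0 End) As [" + sku + "],";
                }
                // SET @column_list = LEFT(@column_list, LEN(@column_list) - 1)
                columnList = columnList.Substring(0, columnList.Length - 1);
                Console.WriteLine(columnList);
            }
        }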


  • GAE - Getting TypeError requiring class instance be passed to class's own method...

    - by Spencer Leland
    I'm really new to programming... I set up a class to give supporting information for the user object of Google's Users API. I store this info in the datastore using db.Model. When I call the okstatus method of my user_info class using this code:

        elif user_info.okstatus(user):
            self.response.out.write("user allowed")

    I get this error:

        unbound method okstatus() must be called with user_info instance as first argument (got User instance instead)

    Here is my user_info class:

        class user_info:
            def auth_ctrlr(self, user):
                if self.status(user) == status_allowed:
                    return ("<a href=\"%s\">Sign Out</a>)" % (users.create_login_url("/")))
                else:
                    return ("<a href=\"%s\">Sign In or Get an Account</a>)" % (users.create_logout_url("/")))

            def status(self, user):
                match = sub_user.gql(qu_by_user_id, user.user_id)
                return match.string_status

            def group(self, user):
                match = sub_user.gql(qu_by_user_id, user.user_id)
                grp = group_names.gql(qu_by_user_id, match.groupID)
                return grp

            def okstatus(self, user):
                match = self.status(user)
                if match == status_allowed:
                    return True

    My understanding is that the "self" argument inside the method's parameter list describes it as a child of the class. I've tried everything I can think of and can't find any related info online. Can someone please tell me what I'm doing wrong? Thanks


  • How to mimic built-in .NET serialization idioms?

    - by Matt Enright
    I have a library (written in C#) for which I need to read/write representations of my objects to disk (or to any Stream) in a particular binary format (to ensure compatibility with C/Java library implementations). The format requires a fair amount of bit-packing and some DEFLATE'd bytestreams. I would like my library to be as idiomatic .NET as possible, however, and so would like to provide an API as close as possible to the normal binary serialization process. I'm aware of the ability to implement the IFormatter interface, but given that I really am unable to reuse any part of the built-in serialization stack, is it worth doing this, or will it just bring unnecessary overhead? In other words:

    1. Implement IFormatter and co., OR
    2. Just provide "Serialize"/"Deserialize" methods that act on a Stream?

    A good point was brought up below about needing the serialization semantics for any case involving Remoting. In a case where using MarshalByRef objects is feasible, I'm pretty sure that this won't be an issue, so leaving that aside: are there any benefits or drawbacks to using ISerializable/IFormatter versus a custom stack (or am I understanding Remoting incorrectly)?
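    For the second option, here is a sketch of what the plain Stream-based surface might look like; the type, fields, and wire layout are illustrative assumptions, not the library's actual format, but the DeflateStream/BinaryWriter pairing matches the DEFLATE'd bytestreams the post mentions:

        using System.IO;
        using System.IO.Compression;

        public class RecordCodec   // hypothetical type, for illustration only
        {
            public int Id;
            public string Name;

            // Writes this object's binary representation to any Stream.
            public void Serialize(Stream output)
            {
                // 'true' leaves the caller's stream open after disposal.
                using (var deflate = new DeflateStream(output, CompressionMode.Compress, true))
                using (var writer = new BinaryWriter(deflate))
                {
                    writer.Write(Id);
                    writer.Write(Name ?? string.Empty);
                }
            }

            // Reads an instance back from any Stream.
            public static RecordCodec Deserialize(Stream input)
            {
                using (var deflate = new DeflateStream(input, CompressionMode.Decompress, true))
                using (var reader = new BinaryReader(deflate))
                {
                    return new RecordCodec { Id = reader.ReadInt32(), Name = reader.ReadString() };
                }
            }
        }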


  • How to get raw jpeg data (but no metatags / proprietary markers)

    - by CoolAJ86
    I want to get raw JPEG data - no metadata. I'm very confused looking at the JPEG "standards". How correct is my understanding of the marker "tree"?

        0xFFD8 - Identifies the file as an image
        0xFFE? - EXIF, JFIF, SPIFF, ICC, etc.
        0x???? - the length of the tag
        0xFFD8 - Start of Image
        0xFFE0 - should follow SOI as per spec, but often preceded by comments ???
        0x???? - Matrices, tags, random data ??? There are never other markers in-between these markers? Or do these include the matrices and such?
        0xFFDA - Start of Stream - This is what I want, yes?
        0xXXXX - length of stream
        0xFFD9 - End of Stream (EOI)
        0x???? - Comment tags, extra EXIF, JFIF info???
        0xFFD9 - End of Image
        0xFF00 - escaped 0xFF, not to be confused with a marker

    This has been my reading material:

    http://en.wikipedia.org/wiki/JPEG
    http://www.sno.phy.queensu.ca/~phil/exiftool/TagNames/JPEG.html
    http://www.media.mit.edu/pia/Research/deepview/exif.html
    http://www.faqs.org/faqs/jpeg-faq/part1/section-15.html
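    By way of illustration of how the segment structure is usually walked: every segment between SOI and SOS carries a big-endian 16-bit length (which includes the two length bytes themselves), so the metadata can be skipped segment by segment until the entropy-coded data begins at SOS. A minimal C# sketch, assuming a well-formed baseline file and doing no real error handling:

        using System;
        using System.IO;

        class JpegMarkerScanner
        {
            static void Main(string[] args)
            {
                using (var fs = File.OpenRead(args[0]))
                {
                    if (fs.ReadByte() != 0xFF || fs.ReadByte() != 0xD8)
                        throw new InvalidDataException("Not a JPEG (missing SOI).");

                    while (true)
                    {
                        if (fs.ReadByte() != 0xFF)
                            throw new InvalidDataException("Expected a marker.");
                        int marker = fs.ReadByte();
                        while (marker == 0xFF) marker = fs.ReadByte(); // skip fill bytes

                        if (marker == 0xDA)   // SOS: entropy-coded data follows
                        {
                            Console.WriteLine("Start of Scan at offset {0}", fs.Position - 2);
                            break;            // raw scan data runs from here to EOI (0xFFD9)
                        }

                        // Big-endian segment length, including the length bytes.
                        int length = (fs.ReadByte() << 8) | fs.ReadByte();
                        Console.WriteLine("Marker 0xFF{0:X2}, {1} payload bytes", marker, length - 2);
                        fs.Seek(length - 2, SeekOrigin.Current);
                    }
                }
            }
        }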


  • Finding intersection of two spheres

    - by Onkar Deshpande
    Hi, consider the following problem: I am given two links of length L0 and L1. P0 is the point where the first link starts, and P1 is the point in 3-D space where I want the end of the second link to be. I am supposed to write a function that takes these 3-D points (P0 and P1) as inputs and finds all configurations of the links that put the second link's end point at P1.

    My understanding of how to go about it is: each link L0 and L1 will create a sphere S0 and S1 around itself. I should find the intersection of those two spheres (which will be a circle) and print all points that are on the circumference of that circle. I saw gmatt's first reply on http://stackoverflow.com/questions/1406375/finding-intersection-points-between-3-spheres but could not understand it properly, since the images did not show up. I also saw a formula for finding the intersection at mathworld.wolfram.com/Sphere-SphereIntersection.html. I can find the radius of the intersection circle by the method given on MathWorld; I can also find the center of that circle and then use the parametric equation of a circle to find the points. The only doubt that I have is: will this method work for the points P0 and P1 mentioned above? Please comment and let me know your thoughts.
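    For reference, here is how that construction works out, under the assumption that the joint point J must satisfy |J - P0| = L0 and |J - P1| = L1 (i.e. S0 is centered at P0 with radius L0, and S1 at P1 with radius L1). With d = |P1 - P0|, a solution exists exactly when |L0 - L1| <= d <= L0 + L1; in LaTeX notation:

        a = \frac{d^{2} + L_0^{2} - L_1^{2}}{2d}, \qquad
        h = \sqrt{L_0^{2} - a^{2}}, \qquad
        C = P_0 + \frac{a}{d}\,(P_1 - P_0)

    Here a is the distance from P0 to the plane of the intersection circle, C is the circle's center, and h is its radius; the joint configurations are C + h(\cos\theta\,\hat{u} + \sin\theta\,\hat{v}) for unit vectors \hat{u}, \hat{v} spanning the plane perpendicular to P1 - P0.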


  • Getter and Setter vs. Builder strategy

    - by Extrakun
    I was reading a JavaWorld article on getters and setters whose basic premise is that getters expose the internal content of an object, hence tightening coupling; it goes on to provide examples using builder objects. I was rather leery of abolishing getters/setters, but on a second reading of the article I came to quite like the idea.

    However, sometimes I just need one crucial element of an entity class, such as the user's ID, and writing one whole class just to extract that crucial element seems like overkill. It also implies that for each different view, a different type of importer/exporter must be implemented (or the whole data of the class must be exported, thus resulting in waste). Usually I tend towards filtering the result of a getter - for example, if I need to output the price of a product in a different currency, I would code it as:

        return CurrencyOutput::convertTo($product->price(), 'USD');

    This is with the understanding that the raw output of a getter is not necessarily the final result to be pushed onto a screen or into a database.

    Is the getter/setter really as bad as it is portrayed to be? When should one adopt a builder strategy, or a 'get the result and filter it' approach? How do you avoid having a class needing to know about every other object if you are not using getters/setters?


  • Javascript scope chain

    - by Geromey
    Hi, I am trying to optimize my program. I think I understand the basics of closures. I am confused about the scope chain, though. I know that in general you want a low scope (to access variables quickly). Say I have the following object:

        var my_object = (function () {
            // private variables
            var a_private = 0;

            return {
                // public variables
                a_public: 1,

                // public methods
                some_public: function () {
                    debugger;
                    alert(this.a_public);
                    alert(a_private);
                }
            };
        })();

    My understanding is that if I am in the some_public method, I can access the private variables faster than the public ones. Is this correct?

    My confusion comes with the scope level of this. When the code is stopped at debugger, Firebug shows the public variable inside the this keyword, but the this word is not inside a scope level. How fast is accessing this? Right now I am storing any this.properties as local variables to avoid accessing them multiple times. Thanks very much!


  • What should every developer know about databases?

    - by Aaronaught
    Whether we like it or not, many if not most of us developers either regularly work with databases or may have to work with one someday. And considering the amount of misuse and abuse in the wild, and the volume of database-related questions that come up every day, it's fair to say that there are certain concepts that developers should know - even if they don't design or work with databases today. So: what are the important concepts that developers and other software professionals ought to know about databases?

    Guidelines for responses:

    - Keep your list short. One concept per answer is best.
    - Be specific. "Data modelling" may be an important skill, but what does that mean precisely?
    - Explain your rationale. Why is your concept important? Don't just say "use indexes." Don't fall into "best practices." Convince your audience to go learn more.
    - Upvote answers you agree with. Read other people's answers first. One high-ranked answer is a more effective statement than two low-ranked ones. If you have more to add, either add a comment or reference the original.
    - Don't downvote something just because it doesn't apply to you personally. We all work in different domains.

    The objective here is to provide direction for database novices to gain a well-founded, well-rounded understanding of database design and database-driven development, not to compete for the title of most-important.


  • Java 2D and Swing

    - by user384706
    Hi, I have trouble understanding a fundamental concept in Java 2D. To give a specific example: one can customize a Swing component by implementing its own version of the method paintComponent(Graphics g), where a Graphics object is available to the body of the method.

    Question: what exactly is this Graphics object? I mean, how is it related to the object that has the paintComponent method? OK, I understand that you can do something like:

        g.setColor(Color.GRAY);
        g.fillOval(0, 0, getWidth(), getHeight());

    to get a gray oval painted. What I cannot understand is how the Graphics object is related to the component and the canvas. How is this drawing actually done?

    Another example:

        public class MyComponent extends JComponent {
            protected void paintComponent(Graphics g) {
                System.out.println("Width:" + getWidth() + ", Height:" + getHeight());
            }

            public static void main(String args[]) {
                JFrame f = new JFrame("Some frame");
                f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                f.setSize(200, 90);
                MyComponent component = new MyComponent();
                f.add(component);
                f.setVisible(true);
            }
        }

    This prints: Width:184, Height:52. What does this size mean? I have not added anything to the frame of size (200, 90). Any help on this is highly welcome. Thanks


  • WCF and streaming requests and responses

    - by Cheeso
    Is it correct that in WCF, I cannot have a service write to a stream that is received by the client? My understanding is that streaming is supported in WCF for requests, responses, or both, and that in all cases the receiver of the stream must invoke Read. I would like to support a scenario where the receiver of the stream can Write on it. Is this supported?

    Let me show it this way. The simplest example of streaming in WCF is the service returning a FileStream to a client - a streamed response. The server code is like this:

        [ServiceContract]
        public interface IStreamService
        {
            [OperationContract]
            Stream GetData(string fileName);
        }

        public class StreamService : IStreamService
        {
            public Stream GetData(string filename)
            {
                FileStream fs = new FileStream(filename, FileMode.Open);
                return fs;
            }
        }

    And the client code is like this:

        StreamDemo.StreamServiceClient client =
            new WcfStreamDemoClient.StreamDemo.StreamServiceClient();
        Stream str = client.GetData(@"c:\path\to\myfile.dat");
        do {
            b = str.ReadByte(); // read next byte from stream
            ...
        } while (b != -1);

    (Example taken from http://blog.joachim.at/?p=33.)

    Clear, right? The server returns the Stream to the client, and the client invokes Read on it. Is it possible for the client to provide a Stream, and the server to invoke Write on it? In other words, rather than a pull model - where the client pulls data from the server - it would be a push model, where the client provides the "sink" stream and the server writes into it. Is this possible in WCF, and if so, how? What are the config settings required for the binding, interface, etc.? The analogy is the Response.OutputStream of an ASP.NET request: in ASP.NET, any page can invoke Write on the output stream, and the content is received by the client. Can I do something similar in WCF? Thanks.
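    For comparison, the closest built-in shape to a "push" in WCF is a streamed request: the client passes a Stream as the operation's only body parameter, and the service reads from it as the client's data arrives. A sketch of that contract (a sketch only - the binding must also have its transferMode set to StreamedRequest or Streamed, and note the service still Reads; it never Writes into a client-owned stream):

        using System.IO;
        using System.ServiceModel;

        [ServiceContract]
        public interface IUploadService
        {
            // With streaming enabled, the Stream must be the sole body parameter.
            [OperationContract]
            void UploadData(Stream data);
        }

        public class UploadService : IUploadService
        {
            public void UploadData(Stream data)
            {
                var buffer = new byte[4096];
                int read;
                // The service pulls bytes as they stream in from the client.
                while ((read = data.Read(buffer, 0, buffer.Length)) > 0)
                {
                    // consume the bytes, e.g. append them to a file
                }
            }
        }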


  • Threads, Sockets, and Designing Low-Latency, High Concurrency Servers

    - by lazyconfabulator
    I've been thinking a lot lately about low-latency, high-concurrency servers - specifically, HTTP servers. HTTP servers (fast ones, anyway) can serve thousands of users simultaneously, with very little latency. So how do they do it?

    As near as I can tell, they all use events. Cherokee and Lighttpd use libevent. Nginx uses its own event library performing much the same function as libevent - that is, picking a platform-optimal strategy for polling events (like kqueue on *BSD, epoll on Linux, /dev/poll on Solaris, etc.). They all also seem to employ a strategy of multiprocess or multithread once the connection is made, using worker threads to handle the more CPU-intensive tasks while another thread continues to listen for and handle connections (via events). This is the extent of my understanding and of my ability to grok the thousand-line sources of these applications.

    What I really want are finer details about how this all works. In the examples of using events I've seen (and written), the events handle both input and output. To this end, do the workers employ some sort of input/output queue to the event-handling thread? Or do the worker threads handle their own input and output? I imagine a fixed number of worker threads is spawned, and connections are lined up and served on demand, but how does the event thread feed these connections to the workers? I've read about FIFO queues and circular buffers, but I've yet to see any implementations to work from. Are there any? Do any use compare-and-swap instructions to avoid locking, or is locking less detrimental to event polling than I think? Or have I misread the design entirely?

    Ultimately, I'd like to take enough away to improve some of my own event-driven network services. Bonus points to anyone providing solid implementation details (especially for stuff like low-latency queues) in C, as that's the language my network services are written in.
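    On the queue question, one common shape is exactly the one guessed at above: the event thread acts as the producer, handing ready sockets to a fixed pool of consumer workers over a thread-safe queue, and each worker then does its own reads and writes on the socket it was handed. A minimal sketch of that handoff (in C# rather than the asker's C, for consistency with the other examples on this page; BlockingCollection wraps a lock-free ConcurrentQueue, standing in for the compare-and-swap queues mentioned):

        using System;
        using System.Collections.Concurrent;
        using System.Net.Sockets;
        using System.Threading;

        class WorkerPoolSketch
        {
            // The event thread produces; the workers consume.
            static readonly BlockingCollection<Socket> ready = new BlockingCollection<Socket>();

            static void Main()
            {
                for (int i = 0; i < 4; i++)             // fixed worker pool
                    new Thread(Worker) { IsBackground = true }.Start();

                // Stand-in for the real event loop (epoll/kqueue/select):
                // whenever a socket becomes readable, hand it to the pool:
                //     ready.Add(readableSocket);
                Thread.Sleep(Timeout.Infinite);
            }

            static void Worker()
            {
                foreach (var socket in ready.GetConsumingEnumerable())
                {
                    // The CPU-intensive handling happens here; the worker does
                    // its own Read/Write on the socket it was handed.
                    var buf = new byte[8192];
                    int n = socket.Receive(buf);
                    if (n > 0) socket.Send(buf, n, SocketFlags.None);   // echo back
                }
            }
        }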


  • Types issue in F#

    - by Andry
    Hello! In my ongoing adventure of deep-diving into F#, I am understanding a lot of this powerful language, but there are things that I still do not understand clearly. One of the most important issues I need to master is types.

    The book I am reading is very straightforward and introduces entities and the main functionality with a direct approach. The first thing I could get started with is types. It introduces the main types such as lists, options, tuples, and so on... It is clearly underlined that all these types are IMMUTABLE, for many reasons regarding functional programming and data consistency in functional programming. Well, no problems until now...

    But now I am getting started with concrete types... Well, I have problems dealing with types like lists, options, and tuples, types created through the new operator, and concrete types created through the type keyword (for abbreviations, concrete types...). So my question is: how can I efficiently catalogue/distinguish all the types of data in F#?

    I can create a perfect separation among types in C# and VB.NET. For example, in VB.NET there are value and reference types, while in C# there are only references, and int, double, etc. are also treated as objects (they are objects, while in VB.NET a value type is not an object, and there is a split in types for this reason). Well, in F# I cannot draw such distinctions among types in the language. Can you help me? I hope I was clear.


  • Binding a date string parameter in an MS Access PDO query

    - by harryg
    I've made a PDO database class which I use to run queries on an MS Access database. When querying using a date condition, as is common in SQL, dates are passed as a string. Access usually expects the date to be surrounded in hashes, however. E.g.:

        SELECT transactions.amount
        FROM transactions
        WHERE transactions.date = #2013-05-25#;

    If I were to run this query using PDO, I might do the following:

        // instantiate PDO connection etc., resulting in a $db object
        $stmt = $db->prepare('SELECT transactions.amount FROM transactions WHERE transactions.date = #:mydate#;'); // prepare the query
        $stmt->bindValue('mydate', '2013-05-25', PDO::PARAM_STR); // bind the date as a string
        $stmt->execute(); // run it
        $result = $stmt->fetch(); // get the results

    As far as my understanding goes, the statement that results from the above would look like this, since binding a string results in it being surrounded by quotes:

        SELECT transactions.amount FROM transactions WHERE transactions.date = #'2013-05-25'#;

    This causes an error and prevents the statement from running. What's the best way to bind a date string in PDO without causing this error? I'm currently resorting to sprintf-ing the string, which I'm sure is bad practice.

    Edit: if I pass the hash-surrounded date then I still get the error, as below:

        Fatal error: Uncaught exception 'PDOException' with message 'SQLSTATE[22018]: Invalid character value for cast specification: -3030 [Microsoft][ODBC Microsoft Access Driver] Data type mismatch in criteria expression. (SQLExecute[-3030] at ext\pdo_odbc\odbc_stmt.c:254)' in C:\xampp\htdocs\ips\php\classes.php:49
        Stack trace:
        #0 C:\xampp\htdocs\ips\php\classes.php(49): PDOStatement->execute()
        #1 C:\xampp\htdocs\ips\php\classes.php(52): database->execute()
        #2 C:\xampp\htdocs\ips\try2.php(12): database->resultset()
        #3 {main}
          thrown in C:\xampp\htdocs\ips\php\classes.php on line 49


  • How does this Switch statement know which case to execute? (PHP/MySQL)

    - by ggfan
    Here is code from a PHP+MySQL book I am reading, and I am having trouble understanding it. This code is about checking a file uploaded into a database. (Please excuse any spelling errors, if any; I was typing it in.)

    Q1: How does it know which case to echo? In the whole code, there is no mention of each case.
    Q2: Why do they skip case 5?! Or does it not matter which numbers you use (so I could have case 1, case 18, case 2)?

        if ($_FILES['userfile']['error'] > 0) {
            echo 'Problem: ';
            switch ($_FILES['userfile']['error']) {
                case 1:
                    echo 'File exceeded upload_max_filesize';
                    break;
                case 2:
                    echo 'File exceeded max_file_size';
                    break;
                case 3:
                    echo 'File only partially uploaded';
                    break;
                case 4:
                    echo 'No file uploaded';
                    break;
                case 6:
                    echo 'Cannot upload file: no temp directory specified';
                    break;
                case 7:
                    echo 'Upload failed: Cannot write to disk';
                    break;
            }
            exit;
        }


  • Covariance and Contravariance inference in C# 4.0

    - by devoured elysium
    When we define our interfaces in C# 4.0, we are allowed to mark each of the generic parameters as in or out. If we try to set a generic parameter as out and that would lead to a problem, the compiler raises an error, not allowing us to do that.

    Question: if the compiler has ways of inferring what are valid uses for both covariance (out) and contravariance (in), why do we have to mark interfaces as such? Wouldn't it be enough to just let us define the interfaces as we always did, and when we tried to use them in our client code, raise an error if we tried to use them in an unsafe way?

    Example:

        interface MyInterface<out T> {
            T abracadabra();
        }
        // works OK

        interface MyInterface2<in T> {
            T abracadabra();
        }
        // compiler raises an error.
        // This makes me think that the compiler is capable
        // of understanding what situations might generate
        // run-time problems and then prohibits them.

    Also, isn't this what Java does in the same situation? From what I recall, you just do something like:

        IMyInterface<? extends whatever> myInterface;  // covariance
        IMyInterface<? super whatever> myInterface2;   // contravariance

    Or am I mixing things up? Thanks
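    To make the distinction concrete, here is a compilable sketch of the two directions. The point of the declaration-site annotation is that variance becomes part of the interface's public contract, which every use site (including ones in other assemblies) can then rely on, rather than being re-stated per use as with Java's wildcards:

        using System;
        using System.Collections.Generic;

        // T appears only in output (return) positions, so 'out' is legal.
        interface IProducer<out T>
        {
            T Produce();
        }

        // T appears only in input (parameter) positions, so 'in' is legal.
        interface IConsumer<in T>
        {
            void Consume(T item);
        }

        class VarianceDemo
        {
            static void Main()
            {
                IProducer<string> strings = null;
                IProducer<object> objects = strings;          // OK: every string is an object

                IConsumer<object> eatsAnything = null;
                IConsumer<string> eatsStrings = eatsAnything; // OK: it can consume strings too

                IEnumerable<string> names = new[] { "a", "b" };
                IEnumerable<object> things = names;           // legal because IEnumerable<out T>
                Console.WriteLine(things == names);
            }
        }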


  • CLR Stored Procedures

    - by Paul Hatcherian
    In an ASP.NET application, I have a small number of fairly complex, frequently used operations to execute against a database. In these operations, one or more of several tables needs updates or inserts based on a logical evaluation of both input parameters and values of certain tables. I've maintained a separation of logic and data access, so the operation currently looks like this:

    1. Request received from client
    2. Business layer invokes data layer to retrieve data from database
    3. Business layer processes result and determines which operation to execute
    4. Business layer invokes appropriate data operation
    5. Response sent to client

    As you can see, the client is kept waiting while two separate requests are made to the database. In searching for a solution to this, I've found CLR stored procedures, but I'm not sure if I have the right idea about what they are useful for. I have written a replacement for the code above which essentially places steps 2-4 in a CLR stored procedure. My understanding is that the procedure will be executed locally by SQL Server, resulting in only one call being made to the server. My initial benchmark tests show this is actually orders of magnitude slower than my original code, but I attribute that to recompilation of the code and/or some flaw in my environment that I have not worked out yet.

    My question is basically: is this the intended use of CLR stored procedures, or am I missing something? I realize this is a bit of a compromise structurally, so if there's a better way to do it I'd love to hear it.
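    For reference, the shape of a CLR stored procedure is a public static method in an assembly registered with SQL Server; inside it, the special "context connection" runs in the calling session, so the read-decide-write sequence happens server-side in a single round trip. A minimal sketch - the table names and the branch logic are hypothetical, invented to mirror steps 2-4:

        using System.Data.SqlClient;
        using Microsoft.SqlServer.Server;

        public class Procedures
        {
            [SqlProcedure]
            public static void ApplyOperation(int accountId)
            {
                // The context connection executes inside the calling session,
                // with no extra network hop back to the server.
                using (var conn = new SqlConnection("context connection=true"))
                {
                    conn.Open();

                    // Step 2: retrieve the data the decision depends on.
                    var check = new SqlCommand(
                        "SELECT Balance FROM Accounts WHERE Id = @id", conn);
                    check.Parameters.AddWithValue("@id", accountId);
                    var balance = (decimal)check.ExecuteScalar();

                    // Step 3: the branch that used to live in the business layer.
                    string sql = balance > 0
                        ? "UPDATE Accounts SET Status = 'active' WHERE Id = @id"
                        : "INSERT INTO Audit (AccountId) VALUES (@id)";

                    // Step 4: execute the chosen operation.
                    var cmd = new SqlCommand(sql, conn);
                    cmd.Parameters.AddWithValue("@id", accountId);
                    cmd.ExecuteNonQuery();

                    SqlContext.Pipe.Send("done");   // optional message to the caller
                }
            }
        }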


  • Authenticating GTK app to run with root permissions

    - by Thomas Tempelmann
    I have a UI app (using GTK) for Linux that needs to be run as root (it reads and writes /dev/disk*). Instead of requiring the user to open a root shell or use "sudo" manually every time he launches my app, I wonder if the app can use some OS-provided API to ask the user to relaunch the app with root permissions. (Note: GTK apps can't use "setuid" mode, so that's not an option here.)

    The advantage would be an easier workflow: the user could, from his default user account, double-click my app on the desktop, and the app would then relaunch itself with root permissions after being authenticated via the API/OS. I ask this because OS X offers exactly this: an app can ask the OS to launch an executable with root permissions - the OS (and not the app) then asks the user to input his credentials, verifies them, and then launches the target as desired. I wonder if there's something similar for Linux (e.g., Ubuntu).

    Update: The app is a remote-operated disk repair tool for the unsavvy Linux user, and those Linux noobs won't have much understanding of using sudo or even changing their user's group memberships, especially if their disk just started acting up and they're freaking out. That's why I seek a solution that avoids technicalities like this.

