Search Results

Search found 21331 results on 854 pages for 'require once'.

  • Getting the ranking of a photo in SQL

    - by Jake Petroules
    I have the following tables:

        Photos [ PhotoID, CategoryID, ... ]        PK [ PhotoID ]
        Categories [ CategoryID, ... ]             PK [ CategoryID ]
        Votes [ PhotoID, UserID, ... ]             PK [ PhotoID, UserID ]

    A photo belongs to one category. A category may contain many photos. A user may vote once on any photo. A photo can be voted for by many users. I want to select the ranks of a photo (by counting how many votes it has) both overall and within the scope of the category that photo belongs to. The count of

        SELECT * FROM Votes WHERE PhotoID = @PhotoID

    is the number of votes a photo has. I want the resulting table to have generated columns for overall rank and rank within category, so that I may order the results by either. So for example, the resulting table from the query should look like:

        PhotoID  VoteCount  RankOverall  RankInCategory
        1        48         1            7
        3        45         2            5
        19       33         3            1
        2        17         4            3
        7        9          5            5
        ...

    ...you get the idea. How can I achieve this? So far I've got the following query to retrieve the vote counts, but I need to generate the ranks as well:

        SELECT PhotoID, UserID, CategoryID, DateUploaded,
            (SELECT COUNT(CommentID) AS Expr1
             FROM dbo.Comments
             WHERE (PhotoID = dbo.Photos.PhotoID)) AS CommentCount,
            (SELECT COUNT(PhotoID) AS Expr1
             FROM dbo.PhotoVotes
             WHERE (PhotoID = dbo.Photos.PhotoID)) AS VoteCount,
            Comments
        FROM dbo.Photos
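    A hedged sketch of the usual answer shape: window functions compute both ranks in one pass. Shown here via Python's sqlite3 so the demo is self-contained (assumes SQLite 3.25+ for window functions; schema and sample data are simplified, not the asker's). SQL Server 2005+ has the same RANK() OVER (...) syntax, so the inner query should translate directly:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE Photos (PhotoID INTEGER PRIMARY KEY, CategoryID INTEGER);
            CREATE TABLE Votes (PhotoID INTEGER, UserID INTEGER,
                                PRIMARY KEY (PhotoID, UserID));
            INSERT INTO Photos VALUES (1, 10), (2, 10), (3, 20);
            INSERT INTO Votes VALUES (1, 1), (1, 2), (2, 1), (3, 1), (3, 2), (3, 3);
        """)

        rows = conn.execute("""
            SELECT p.PhotoID,
                   COUNT(v.UserID) AS VoteCount,
                   RANK() OVER (ORDER BY COUNT(v.UserID) DESC) AS RankOverall,
                   RANK() OVER (PARTITION BY p.CategoryID
                                ORDER BY COUNT(v.UserID) DESC) AS RankInCategory
            FROM Photos p LEFT JOIN Votes v ON v.PhotoID = p.PhotoID
            GROUP BY p.PhotoID, p.CategoryID
            ORDER BY RankOverall
        """).fetchall()
        for r in rows:
            print(r)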

  • Return latitude/longitude based on entered address

    - by Don
    I'm building a PHP-based application for a client to enter in addresses for their customers' buildings. They'd like the ability to view the location on a map (either as individuals or grouped in a city search). What I'm trying to accomplish is a lookup once the address is entered into a form that populates the database: after they enter the address, city, state, zip (these are all US locations), they could click a "get lat/long info" link/button that would check to make sure the data is complete, then look up the address and return the latitude/longitude into the appropriate form fields. Then the form could be submitted to store the info, and I could later just pull the lat/long when plotting on a map.

    1) Does this make sense, or would I be better off just doing the lookup when it's time to plot it?

    2) Does anyone have any pointers to solve this problem? I've seen some of the Google/Yahoo APIs, but it looks like they're more geared to the plotting-a-point part. I may be able to modify one to suit my needs, but I'm just trying to cut some research time by posting here, with the hope that one of you may have a more direct route. I'll RTFM if I have to...

    Thanks, D.
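    For question 2, the usual shape is a single HTTP call at save time. A hedged sketch against Google's Geocoding web service (Python with the third-party requests library; the endpoint and JSON layout are as documented for the Maps API, an API key is required, and the terms of service place limits on storing results, which is worth checking):

        import requests

        def geocode(address, api_key):
            """Return (lat, lng) for a street address, or None if not found."""
            resp = requests.get(
                "https://maps.googleapis.com/maps/api/geocode/json",
                params={"address": address, "key": api_key},
                timeout=10,
            )
            results = resp.json().get("results")
            if not results:
                return None
            loc = results[0]["geometry"]["location"]
            return loc["lat"], loc["lng"]

        # geocode("1600 Amphitheatre Pkwy, Mountain View, CA 94043", "YOUR_KEY")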

  • Give a reference to a python instance attribute at class definition

    - by Guenther Jehle
    I have a class whose attributes hold a reference to another attribute of the same class. See class Device below: value1 and value2 hold a reference to interface:

        class Interface(object):
            def __init__(self):
                self.port = None

        class Value(object):
            def __init__(self, interface, name):
                self.interface = interface
                self.name = name

            def get(self):
                return "Getting Value \"%s\" with interface \"%s\"" % (self.name, self.interface.port)

        class Device(object):
            interface = Interface()
            value1 = Value(interface, name="value1")
            value2 = Value(interface, name="value2")

            def __init__(self, port):
                self.interface.port = port

        if __name__ == "__main__":
            d1 = Device("Foo")
            print d1.value1.get()  # >>> Getting Value "value1" with interface "Foo"
            d2 = Device("Bar")
            print d2.value1.get()  # >>> Getting Value "value1" with interface "Bar"
            print d1.value1.get()  # >>> Getting Value "value1" with interface "Bar"

    The last print is wrong, because d1 should have the interface "Foo". I know what's going wrong: the line interface = Interface() is executed when the class definition is parsed (once), so every Device instance shares the same Interface instance. I could change the Device class to:

        class Device(object):
            interface = Interface()
            value1 = Value(interface, name="value1")
            value2 = Value(interface, name="value2")

            def __init__(self, port):
                self.interface = Interface()
                self.interface.port = port

    But this is also not working: the values still hold the reference to the original Interface instance, and self.interface is just another instance. The output now is:

        >>> Getting Value "value1" with interface "None"
        >>> Getting Value "value1" with interface "None"
        >>> Getting Value "value1" with interface "None"

    So how could I solve this the Pythonic way? I could set up a function in the Device class to look for attributes of type Value and reassign them the new interface. Isn't this a common problem with a typical solution for it? Thanks!
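    A minimal sketch of the usual fix, reusing the Interface and Value classes from the question: create the shared objects per instance in __init__ instead of in the class body, so each Device gets its own Interface and its own Value objects bound to it:

        class Device(object):
            def __init__(self, port):
                self.interface = Interface()
                self.interface.port = port
                self.value1 = Value(self.interface, name="value1")
                self.value2 = Value(self.interface, name="value2")

    With this version, each Device("Foo") builds an independent object graph, so d1 and d2 no longer interfere with each other.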

  • PHP script running as scheduled task hangs - help!

    - by Ali
    Hi guys, I've built a PHP script that runs from the command line. It opens a connection to a POP3 email account, downloads all the emails, writes them to a database, and deletes them once downloaded. The script is called from the command line by a bat file; in turn, I have created a scheduled task which invokes the bat file every 5 minutes.

    I have set the timeout to zero because at times there could be emails with large attachments; the script downloads the attachments and stores them as raw files offline, and the no-timeout setting is there so that the script doesn't die while downloading.

    I've found that the program sometimes hangs, which is a bit annoying, and it always hangs at one point: when negotiating the connection to the mail server. Because the timeout is set to zero, it stays stuck in that position, and since it's technically hung, the task never completes.

    I want the program not to time out while downloading emails; however, at the points where it is negotiating a connection or trying to connect to the mail server, there should be a timeout at that point only, not for the rest of the program execution. How do I do this :(
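    The shape of the fix is a two-phase timeout: finite while connecting and negotiating, unlimited for the bulk download. A hedged sketch of that pattern using Python's poplib (host and credentials hypothetical; reaching into conn.sock is slightly unofficial, and PHP's stream functions allow the same per-phase switch):

        import poplib

        # phase 1: connection and negotiation get a hard 30-second limit
        conn = poplib.POP3("pop.example.com", timeout=30)
        conn.user("account")
        conn.pass_("secret")

        # phase 2: large downloads may take as long as they need
        conn.sock.settimeout(None)
        for i in range(len(conn.list()[1])):
            response, lines, octets = conn.retr(i + 1)
            # ... write the message to the database, then delete it
            conn.dele(i + 1)
        conn.quit()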

  • regular expression for emails NOT ending with replace script

    - by corroded
    I'm currently modifying my regex for this: http://stackoverflow.com/questions/2782031/extracting-email-addresses-in-an-html-block-in-ruby-rails

    Basically, I'm making another obfuscator that uses ROT13 by parsing a block of text for all links that contain a mailto referrer (using Hpricot). One use case this doesn't catch is the user just typing in an email address (without turning it into a link via TinyMCE). So here's the basic flow of my method:

    1. Parse a block of text for all tags with href="mailto:..."
    2. Replace each tag with a JavaScript function that changes it into ROT13 (using this script: http://unixmonkey.net/?p=20)
    3. Once all links are obfuscated, pass the resulting block of text into another function that parses for all emails (this one has an email regex that reverses the email address and then adds a span to that email, to reverse it back)

    Step 3 is supposed to clean the block of text of remaining emails that AREN'T in href tags (meaning they weren't parsed by Hpricot). The problem with this is that the emails that were converted to ROT13 are still found by my regex. What I want to catch are just the emails that WEREN'T CONVERTED to ROT13. How do I do this? Well, all emails that WERE CONVERTED have a trailing "'.replace" in them; meaning, I need to get all emails WITHOUT that trailing string. So far I have this regex:

        /\b([A-Z0-9._%+-]+@[A-Z0-9.-]+.[A-Z]{2,4}('.replace))\b/i

    But this gets all the emails WITH the trailing '.replace. I want to get the opposite, and I'm currently stumped. Any help from regex gurus out there?

    MORE INFO: Here's the regex plus the block of text I'm parsing: http://www.rubular.com/r/NqXIHrNqjI - as you can see, the first two 'email addresses' are already obfuscated using ROT13. I need a regex that gets the emails [email protected] and [email protected]
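    The standard tool for "matches NOT followed by X" is a negative lookahead, (?!...). A hedged sketch in Python (Ruby's regexes support the same construct; the sample addresses here are made up):

        import re

        text = ("'[email protected]'.replace(...) "
                "plain: [email protected] and [email protected]")

        EMAIL = r"[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}"
        # match an email only if it is NOT immediately followed by '.replace;
        # the \b before the lookahead stops backtracking into a shorter TLD
        pattern = re.compile(r"\b" + EMAIL + r"\b(?!'\.replace)", re.IGNORECASE)

        print(pattern.findall(text))  # only the two plain addresses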

  • Python on Mac: Fink? MacPorts? Builtin? Homebrew? Binary installer?

    - by BastiBechtold
    For the last few days, I have been trying to use Python for some audio development. The thing is, Mac OS X does not handle uninstalling stuff well. Actually, there is no way to uninstall anything: once it is on your system, you'd better pray that it didn't do any funny stuff. Hence, I don't really want to rely on installer packages for Python.

    So I turned to Homebrew and installed Python using Homebrew. Works fabulously. Using pip, NumPy, SciPy and Matplotlib were no (big) problem either.

    Now I want to play audio. There is a host of different packages out there, but pip does not seem willing to install any. There is, however, a binary distribution for Pygame, which I guess should work with the built-in Python.

    Hence my question: what would you do? Would you just install the binary distributions and hope that they interoperate well and never need uninstalling? Would you hack your way through whichever package management system you prefer and deal with its problems? Something else?

  • django-cms lighttpd redirect domain to url

    - by Robert
    Hello, I am using django-cms for my site, but instead of a language alias (/en/, /de/) I need to use another domain. I would like to avoid running multiple Django instances; instead I would like to use lighttpd redirects if possible. I would like requests coming to domain2.com to get data from domain.com/en. The best would be if a user entering domain2.com/offer transparently got the data from domain.com/en/offer.

    I tried many solutions with url.redirect and url.rewrite, but none seems to work as desired. Also tried the approach from http://stackoverflow.com/questions/261904/matching-domains-with-regex-for-lighttpd-mod-evhost-www-domain-com-domain-com but that didn't work. Please help. This is my lighttpd configuration:

        $HTTP["host"] == "^domain2\.com" {
            url.redirect = ("^/(.*)" => "http://domain.com/en/$1")
        }

        $HTTP["host"] =~ "^domain\.com" {
            server.document-root = "/var/www/django/projects/domain/"
            accesslog.filename   = "/var/log/lighttpd/domain.log-access.log"
            server.errorlog      = "/var/log/lighttpd/www.domain-error.log"
            fastcgi.server = (
                "/domain-service.fcgi" => (
                    "main" => (
                        "socket" => "/tmp/django-domain.sock",
                        "check-local" => "disable",
                    )
                ),
            )
            alias.url = ( "/media/" => "/var/www/django/projects/domain/media/", )
            url.rewrite-once = (
                "^(/site_media.*)$" => "$1",
                "^(/media.*)$" => "$1",
                "^/favicon\.ico$" => "/media/favicon.ico",
                "^(/.*)$" => "/domain-service.fcgi$1",
            )
        }

    Thanks

  • protocol parsing in c

    - by nomad.alien
    I have been playing around with implementing some protocol decoders, but each time I run into a "simple" problem and I feel the way I am solving it is not optimal and there must be a better way to do things. I'm using C. Currently I'm using some canned data and reading it in as a file, but later on it will arrive via TCP or UDP.

    Here's the problem. I'm currently playing with a binary protocol at work. All fields are 8 bits long. The first field (8 bits) is the packet type. So I read in the first 8 bits and, using a switch/case, I call a function to read in the rest of the packet, as I then know its size/structure. BUT... some of these packets have nested packets inside them, so when I encounter such a packet I then have to read another 8-16 bytes and have another switch/case to see what the next packet type is, and on and on. (Luckily the packets are only nested 2 or 3 deep.) Only once I have the whole packet decoded can I hand it over to my state machine for processing.

    I guess this can be a more general question as well: how much data do you have to read at a time from the socket? As much as possible? As much as is "similar" in the protocol headers?

    So even though this protocol is fairly basic, my code is a whole bunch of switch/case statements and I do a lot of reading from the file/socket, which I feel is not optimal. My main aim is to make this decoder as fast as possible. To the more experienced people out there: is this the way to go, or is there a better way which I just haven't figured out yet? Any elegant solution to this problem?
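    One common shape for this: loop over short reads so each field arrives whole, and dispatch on the type byte through a table rather than nested switch blocks. A hedged sketch in Python (type codes and handlers are hypothetical; the same structure maps to C as an array of function pointers indexed by packet type):

        import struct

        def read_exactly(sock, n):
            """Loop over recv() until exactly n bytes have arrived."""
            buf = b""
            while len(buf) < n:
                chunk = sock.recv(n - len(buf))
                if not chunk:
                    raise EOFError("connection closed mid-packet")
                buf += chunk
            return buf

        def parse_type_a(sock):
            # fixed-size payload: two 8-bit fields
            return struct.unpack("BB", read_exactly(sock, 2))

        def parse_type_b(sock):
            # nested packet: recurse instead of switching again
            return ("wrapper", parse_packet(sock))

        HANDLERS = {0x01: parse_type_a, 0x02: parse_type_b}

        def parse_packet(sock):
            (ptype,) = read_exactly(sock, 1)
            return HANDLERS[ptype](sock)

    Reading one large buffer per recv() and parsing from memory, rather than many tiny reads, is the usual next optimization once the dispatch structure is in place.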

  • How might I wrap the FindXFile-style APIs to the STL-style Iterator Pattern in C++?

    - by BillyONeal
    Hello everyone :) I'm working on wrapping up the ugly innards of the FindFirstFile/FindNextFile loop (though my question applies to other similar APIs, such as RegEnumKeyEx or RegEnumValue, etc.) inside iterators that work in a manner similar to the Standard Template Library's istream_iterators. I have two problems here.

    The first is with the termination condition of most "foreach" style loops. STL-style iterators typically use operator!= inside the exit condition of the for, i.e.

        std::vector<int> test;
        for (std::vector<int>::iterator it = test.begin(); it != test.end(); it++)
        {
            // Do stuff
        }

    My problem is I'm unsure how to implement operator!= with such a directory enumeration, because I do not know when the enumeration is complete until I've actually finished with it. I have a sort of hacked-together solution in place now that enumerates the entire directory at once, where each iterator simply tracks a reference-counted vector, but this seems like a kludge which can be done a better way.

    The second problem I have is that there are multiple pieces of data returned by the FindXFile APIs. For that reason, there's no obvious way to overload operator* as required for iterator semantics. When I overload that item, do I return the file name? The size? The modified date? How might I convey the multiple pieces of data to which such an iterator must refer in an idiomatic way?

    I've tried ripping off the C#-style MoveNext design, but I'm concerned about not following the standard idioms here:

        class SomeIterator
        {
        public:
            bool next(); // Advances the iterator and returns true if successful,
                         // false if the iterator is at the end.
            std::wstring fileName() const;
            // other kinds of data....
        };

    EDIT: And the caller would look like:

        SomeIterator x = ??; // Construct somehow
        while (x.next())
        {
            // Do stuff
        }

    Thanks! Billy3

  • Parse a CSV file using python (to make a decision tree later)

    - by Margaret
    First off, full disclosure: this is going towards a uni assignment, so I don't want to receive code. :) I'm more looking for approaches; I'm very new to Python, having read a book but not yet written any code.

    The entire task is to import the contents of a CSV file, create a decision tree from the contents of the CSV file (using the ID3 algorithm), and then parse a second CSV file to run against the tree. There's a big (understandable) preference to have it capable of dealing with different CSV files (I asked if we were allowed to hard-code the column names, mostly to eliminate it as a possibility, and the answer was no).

    The CSV files are in a fairly standard format; the header row is marked with a #, then the column names are displayed, and every row after that is a simple series of values. Example:

        # Column1, Column2, Column3, Column4
        Value01, Value02, Value03, Value04
        Value11, Value12, Value13, Value14

    At the moment, I'm trying to work out the first part: parsing the CSV. To make the decisions for the decision tree, a dictionary structure seems like it's going to be the most logical, so I was thinking of doing something along these lines:

        Read in each line, character by character
            If the character is not a comma or a space
                Append character to temporary string
            If the character is a comma
                Append the temporary string to a list
                Empty string
        Once a line has been read
            Create a dictionary using the header row as the key (somehow!)
            Append that dictionary to a list

    However, if I do things that way, I'm not sure how to make a mapping between the keys and the values. I'm also wondering whether there is some way to perform an action on every dictionary in a list, since I'll need to be doing things to the effect of "Everyone return their values for columns Column1 and Column4, so I can count up who has what!" - I assume that there is some mechanism, but I don't think I know how to do it.

    Is a dictionary the best way to do it? Would I be better off doing things using some other data structure? If so, what?
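    (The question asks for approaches rather than code, but for the record:) Python's csv module already does the per-character tokenizing described above, and csv.DictReader produces exactly the list-of-dictionaries mapping. A minimal sketch, assuming the '#'-prefixed header format from the example and a hypothetical file name:

        import csv
        from collections import Counter

        def load_rows(path):
            with open(path, newline="") as f:
                header = f.readline().lstrip("#").strip()
                fieldnames = [name.strip() for name in header.split(",")]
                reader = csv.DictReader(f, fieldnames=fieldnames,
                                        skipinitialspace=True)
                return list(reader)

        rows = load_rows("training.csv")
        # "everyone return their values for Column1, so I can count who has what":
        print(Counter(row["Column1"] for row in rows))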

  • Creating a tar file with checksums included

    - by wazoox
    Here's my problem: I need to archive to tar files a lot (up to 60 TB) of big files (usually 30 to 40 GB each). I would like to make checksums (md5, sha1, whatever) of these files before archiving; however, not reading every file twice (once for checksumming, twice for tar'ing) is more or less a necessity to achieve very high archiving performance (LTO-4 wants 120 MB/s sustained, and the backup window is limited).

    So I'd need some way to read a file, feeding a checksumming tool on one side and building a tar to tape on the other side, something along the lines of:

        tar cf - files | tee tarfile.tar | md5sum -

    Except that I don't want the checksum of the whole archive (this sample shell code does just that) but a checksum for each individual file in the archive.

    I've studied the GNU tar, Pax and Star options. I've looked at the source of Archive::Tar. I see no obvious way to achieve this. It looks like I'll have to hand-build something in C or similar to achieve what I need. Perl/Python/etc. simply won't cut it performance-wise, and the various tar programs miss the necessary "plugin architecture". Does anyone know of any existing solution to this before I start code-churning?
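    To illustrate the single-read structure (with no claim about tape-speed throughput, which the asker doubts scripting languages can reach): a file-like wrapper can feed a digest as the archiver drains it, so each file is read exactly once. A sketch with Python's tarfile and hashlib:

        import hashlib
        import tarfile

        class HashingReader:
            """File-like object that updates an md5 digest as it is read."""
            def __init__(self, path):
                self.f = open(path, "rb")
                self.digest = hashlib.md5()

            def read(self, size=-1):
                data = self.f.read(size)
                self.digest.update(data)
                return data

            def close(self):
                self.f.close()

        def tar_with_checksums(paths, archive_path):
            checksums = {}
            with tarfile.open(archive_path, "w") as tar:
                for path in paths:
                    info = tar.gettarinfo(path)
                    reader = HashingReader(path)
                    tar.addfile(info, fileobj=reader)  # reads through the wrapper
                    reader.close()
                    checksums[path] = reader.digest.hexdigest()
            return checksums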

  • Emptying the datastore in GAE

    - by colwilson
    I know what you're thinking, 'Oh no, not that again!', but here we are, since Google has not yet provided a simpler method. I have been using a queue-based solution which worked fine:

        import datetime
        from models import *

        DELETABLE_MODELS = [Alpha, Beta, AlphaBeta]

        def initiate_purge():
            for e in DELETABLE_MODELS:
                deferred.defer(delete_entities, e, 'purging', _queue='purging')

        class NotEmptyException(Exception):
            pass

        def delete_entities(e, queue):
            try:
                q = e.all(keys_only=True)
                db.delete(q.fetch(200))
                ct = q.count(1)
                if ct > 0:
                    raise NotEmptyException('there are still entities to be deleted')
                else:
                    logging.info('processing %s completed' % queue)
            except Exception, err:
                deferred.defer(delete_entities, e, queue, _queue=queue)
                logging.info('processing %s deferred: %s' % (queue, err))

    All this does is queue a request to delete some data (once for each class), and then, if the queued process either fails or knows there is still some stuff to delete, it re-queues itself. This beats the heck out of hitting refresh in a browser for 10 minutes.

    However, I'm having trouble deleting AlphaBeta entities; there are always a few left at the end. I think it's because that model contains reference properties:

        class AlphaBeta(db.Model):
            alpha = db.ReferenceProperty(Alpha, required=True, collection_name='betas')
            beta = db.ReferenceProperty(Beta, required=True, collection_name='alphas')

    I have tried deleting the indexes relating to these entity types, but that made no difference. Any advice would be appreciated, please.

  • Online file storage similar to Amazon S3

    - by Joel G
    I am looking to code a file storage application in Perl, similar to Amazon S3. I already have an Amazon S3 clone that I found online called ParkPlace, but it's in Ruby, is old, and isn't built for high loads. I am not really sure what modules and programs I should use, so I'd like some help picking them out. My requirements are listed below (yes, I know there are lots, but I could start simple and then add more once I get it going):

    - Easy API implementation for client-side apps (maybe RESTful, but with extras like mkdir and cp?)
    - Centralized database server for the USERDB (maybe PostgreSQL?)
    - Logging of all connections, bandwidth used, well, pretty much everything, to a centralized server (maybe PostgreSQL again?)
    - Easy server-side configuration (config file(s) stored on the servers)
    - Web-based control panel for admin(s) and user(s) to show logs (could work just by running queries from the databases)
    - Fast
    - High uptime
    - Low memory usage
    - Some sort of load distribution/load balancer (maybe DNS-based, or Pound, or Perlbal, or something else?)
    - Maybe a cache of some sort (memcached or Perlbal or something else?)

    Thanks in advance

  • Visual Studio hangs when deploying a cube

    - by Richie
    Hello all, I'm having an issue with an Analysis Services project in Visual Studio 2005. My project always builds but only occasionally deploys. No errors are reported; VS just hangs. This is my first Analysis Services project, so I am hoping that there is something obvious that I am just missing.

    Here is the situation. I have a cube that I have successfully deployed. I then make some change, e.g., adding a hierarchy to a dimension. When I try to deploy again, VS hangs. I have to restart Analysis Services to regain control of VS so I can shut it down. I restart everything, sometimes once, sometimes twice or more, before the project will eventually deploy. This happens with any change I make; there seems to be no pattern to this behaviour. Sometimes I have to delete the cube from Analysis Services before restarting everything to get a successful deploy.

    Also, after I have successfully deployed the cube and then successfully reprocessed a dimension, when I open a query window in SQL Server Management Studio it says that it can't find any cubes.

    As a test, I deployed a cube successfully, deleted it in Analysis Services, and attempted to redeploy it without making any changes to the cube, only to see the same behaviour mentioned above. VS just hangs with no reason given, so I have no idea where to start hunting down the problem. It is taking 15-20 minutes to make a change as simple as setting the NameColumn of a dimension attribute. As you can imagine, this is eating hours of my time, so I would greatly appreciate any assistance anyone can give me.

  • Is it possible to have asynchronous processing

    - by prashant2361
    Hi, I have a requirement where I need to send continuous updates to my clients. The client is a browser in this case. We have some data which updates every second, so once a client connects to our server, we maintain a persistent connection and keep pushing data to the client. I am looking for suggestions for the server-end implementation. Basically what I need is this:

    1. A client connects to the server. I maintain the socket and metadata about the socket; the metadata describes what updates need to be sent to this client.
    2. The server process now waits for new client connections.
    3. One other process has the list of all the open sockets, goes through each of them, and sends the updates if required.

    Can we do something like this in an Apache module?

    1. An Apache process gets the new connection. It maintains the state for the connection, keeps that state in some global memory, and returns to the root process to signify that it is done, so that the root process can accept a new connection.
    2. Although the Apache process has returned its status to the root process, it also keeps executing in parallel, going through its global store and sending updates to the clients, if any.

    So can an Apache process do these things:

    1. Have more than one connection associated with it?
    2. Asynchronously wait for new connections and at the same time process the previous connections?

    Regards, Prashant

  • Managed bean property value not set to null

    - by Vladimir
    Hi! I'm new to JSF, so this question might be strange. I have an inputText component's value bound to a managed bean property of type Float. I need to set the property to null when the inputText field is empty, not to 0. That's not done by default, so I added a converter with the following method implemented:

        public Object getAsObject(FacesContext arg0, UIComponent arg1, String arg2)
                throws ConverterException {
            if (StringUtils.isEmpty(arg2)) {
                return null;
            }
            float result = Float.parseFloat(arg2);
            if (result == 0) {
                return null;
            }
            return result;
        }

    I registered the converter and assigned it to the inputText component. I logged the arg2 argument and also the return value of the getAsObject method; by my log I can see that it returns null. But I also logged the setter on the backing bean, and the argument is 0, not null as expected. To be more precise, the setter is called twice: once with a null argument, the second time with a 0 argument. It still sets the backing bean value to 0. How can I get the value set to null? Thanks in advance.

  • How do you keep text from wrapping in an NSTableView using NSAttributedString

    - by Justin
    I have an NSTableView that has 2 columns, one for an icon and the other for two lines of text. In the second column, the text column, I have some larger text that is the name of an item, then a new line and some smaller text that describes the state of the item. When the name becomes so large that it doesn't fit on one line, it wraps (likewise when you shrink the window down so small that the names no longer fit on a single line):

        row1===============
        | image | some name |
        | image | idle      |
        row2================
        | image | some name really long name |  <- this gets wrapped, pushing 'idle' out of the view
        | image | idle      |
        ===================

    My question is: how can I keep the text from wrapping, and just have the NSTableView display a horizontal scroll-bar once the name is too large to fit?

  • Need some help understanding this problem

    - by Legend
    I was wondering if someone could help me understand this problem. I prepared a small diagram because it is much easier to explain it visually. The problem I am trying to solve:

    1. Constructing the dependency graph. Given the connectivity of the graph and a metric that determines how well a node depends on the other, order the dependencies. For instance, I could put in a few rules saying that:

        node 3 depends on node 4
        node 2 depends on node 3
        node 3 depends on node 5

    But because the final rule is not "valuable" (again, based on the same metric), I will not add the rule to my system.

    2. Executing the request order. Once I have built the dependency graph, execute the list in an order that maximizes the final connectivity.

    First and foremost, I am wondering if I constructed the problem correctly and if I should be aware of any corner cases. Secondly, is there a closely related algorithm that I can look at? Currently, I am thinking of something like Feedback Arc Set or the Secretary Problem, but I am a little confused at the moment. Any suggestions?

    PS: I am a little confused about the problem myself, so please don't flame me for that. If any clarifications are needed, I will try to update the question.
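    If the kept rules end up forming a DAG, a topological sort gives a valid execution order. A minimal sketch (Kahn's algorithm in Python, with the value-based rule filtering assumed to have happened already):

        from collections import defaultdict, deque

        def topo_order(edges):
            """edges: (node, depends_on) pairs. Returns an order in which
            every node comes after everything it depends on."""
            indegree = defaultdict(int)
            dependents = defaultdict(list)
            nodes = set()
            for node, dep in edges:
                dependents[dep].append(node)
                indegree[node] += 1
                nodes.update((node, dep))
            queue = deque(n for n in nodes if indegree[n] == 0)
            order = []
            while queue:
                n = queue.popleft()
                order.append(n)
                for m in dependents[n]:
                    indegree[m] -= 1
                    if indegree[m] == 0:
                        queue.append(m)
            if len(order) != len(nodes):
                raise ValueError("cycle detected; breaking it is where a "
                                 "Feedback Arc Set approach would come in")
            return order

        # the rules kept in the question: 3 depends on 4, 2 depends on 3
        print(topo_order([(3, 4), (2, 3)]))  # [4, 3, 2]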

  • Sending files using Winsock - optimal send() data length?

    - by Meta
    I am using Winsock with non-blocking sockets to send a file to a client. The way I'm doing it right now is that I read a chunk of 8192 bytes from the file, and then loop until all of it successfully goes through send() (obviously handling WSAEWOULDBLOCK as it occurs). I then move on and read the next 8192 bytes, and so on...

    Although I can use any other number than 8192 when I test the transfer on my local machine, once I try it over a network it seems like 8191 is the largest number I can use. When I try to use any number higher than 8191 (starting with 8192), the file transfer becomes extremely slow (about 5 times slower). Is there any reason why 8191 is so special? I've done some more testing, and it turns out that using 8000 is slightly faster (by 0.5%). If you understand why 8191 is so special, can you tell me if there is a number better than the others (better than 8000)?

    I have a feeling that it has something to do with the fact that the default send buffer allocated to the socket by Winsock is 8 KB, but I don't understand why. It might also have something to do with the Nagle algorithm, but again, I'm not sure how. Note that I have not modified the SO_SNDBUF option nor the TCP_NODELAY option.

    Or am I doing this all wrong? What's the best way of sending a file over a non-blocking socket?

  • How to efficiently store and update binary data in Mongodb?

    - by Rocketman
    I am storing a large binary array within a document. I wish to continually add bytes to this array and sometimes change the value of existing bytes. I was looking for some $append_bytes and $replace_bytes types of modifiers, but it appears that the best I can do is $push for arrays. It seems like this would be doable by performing seek-write type operations if I had access somehow to the underlying BSON on disk, but it does not appear to me that there is any way to do this in MongoDB (and probably for good reason).

    If I were instead to just query this binary array, edit or add to it, and then update the document by rewriting the entire field, how costly would this be? Each binary array is on the order of 1-2 MB, and updates occur once every 5 minutes and across thousands of documents. Worse yet, there is no easy way to spread these out in time, and they will usually be happening close to one another on the 5-minute intervals. Does anyone have a good feel for how disastrous this would be? It seems like it would be problematic.

    An alternative would be to store this binary data as separate files on disk, implement a thread pool to efficiently manipulate the files on disk, and reference the filename from my MongoDB document. (I'm using Python and pymongo, so I was looking at PyTables.) I'd prefer to avoid this if possible.

    Is there any other alternative that I am overlooking here? Thanks in advance.
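    For reference, the read-modify-rewrite cycle itself is short. A hedged pymongo sketch (modern 3.x+ API; the collection and field names are hypothetical) of exactly the whole-field rewrite whose cost is being asked about:

        from pymongo import MongoClient
        from bson.binary import Binary

        def patch_and_append(coll, doc_id, offset, patch, suffix):
            """Fetch the blob, edit it client-side, write the whole field back."""
            doc = coll.find_one({"_id": doc_id})
            data = bytearray(doc["payload"])
            data[offset:offset + len(patch)] = patch   # change existing bytes
            data += suffix                             # append new bytes
            coll.update_one({"_id": doc_id},
                            {"$set": {"payload": Binary(bytes(data))}})

        coll = MongoClient().mydb.blobs  # hypothetical database/collection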

  • Is the size of a struct required to be an exact multiple of the alignment of that struct?

    - by Steve314
    Once again, I'm questioning a longstanding belief. Until today, I believed that the alignment of the following struct would normally be 4 and the size would normally be 5...

        struct example
        {
            int m_Assume_32_Bits;
            char m_Assume_8_Bit_Bytes;
        };

    Because of this assumption, I have data structure code that uses offsetof to determine the distance in bytes between two adjacent items in an array. Today, I spotted some old code that was using sizeof where it shouldn't, couldn't understand why I hadn't had bugs from it, coded up a unit test - and the test surprised me by passing.

    A bit of investigation showed that the sizeof the type I used for the test (similar to the struct above) was an exact multiple of the alignment - i.e. 8 bytes. It had padding after the final member. Here is an example of why I never expected this...

        struct example2
        {
            example m_Example;
            char m_Why_Cant_This_Be_At_Offset_6_Bytes;
        };

    A bit of Googling showed examples that make it clear that this padding after the final member is allowed - for example http://en.wikipedia.org/wiki/Data_structure_alignment#Data_structure_padding (the "or at the end of the structure" bit). This is a bit embarrassing, as I recently posted a comment saying otherwise ("Use of struct padding", my first comment to that answer).

    What I can't seem to determine is whether this padding to an exact multiple of the alignment is guaranteed by the C++ standard, or whether it is just something that is permitted and that some (but maybe not all) compilers do.

    So - is the size of a struct required to be an exact multiple of the alignment of that struct according to the C++ standard? If the C standard makes different guarantees, I'm interested in that too, but the focus is on C++.
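    The behaviour in question (tail padding so that size is a multiple of alignment, which is what makes indexing an array by sizeof land each element on an aligned address) can be observed from Python's ctypes, which lays out structures using the platform C ABI. A small demonstration, assuming a typical platform with 4-byte ints:

        import ctypes

        class Example(ctypes.Structure):
            _fields_ = [("m_assume_32_bits", ctypes.c_int),
                        ("m_assume_8_bit_bytes", ctypes.c_char)]

        print(ctypes.sizeof(Example))     # 8, not 5: three bytes of tail padding
        print(ctypes.alignment(Example))  # 4

        # arrays rely on that padding: element 1 starts at offset sizeof(Example)
        pair = (Example * 2)()
        print(ctypes.addressof(pair[1]) - ctypes.addressof(pair[0]))  # 8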

  • Can you catch exceeded allocated memory error before it kills the script?

    - by kristovaher
    The thing is that I want to catch memory problems before they happen. I have a system that gets rows from a database and casts the returned associative array to a variable, but I never know what the size of the database result is or how much memory it will take once the database request is made. This means that my software can fail simply because memory is exceeded, but I want to avoid that somehow.

    One of the ways is obviously to make database requests that are smaller, but what if this is not possible, or what if I do not know the size of the data that is returned from the database? Is it possible to 'catch' situations where memory use is exceeded in PHP? Something like this:

        $requestOk = memory_test(function(){
            return doSomething();
        });
        if ($requestOk) {
            // Memory seems fine
            // $requestOk now has the value from the memory_test() function
        } else {
            // Function would have exceeded memory
        }

    I just find it problematic that my script can die at any moment because of memory issues. From what I know, try-catch cannot be used here because it is a fatal error. Any help would be appreciated!

  • Combining the jquery Calendar picker with the jquery validation and getting them to play nice?

    - by MBonig
    I'm just starting to get into jQuery/JavaScript programming, so this may be a beginner question, but I sure can't seem to find anything about it.

    I have a text field on a form. I've used the jQuery UI calendar picker to make data entry better. I've also added jQuery validation to ensure that if the user enters the date by hand, it's still valid. Here is the HTML:

        <input class="datepicker" id="Log_ServiceStartTime_date" name="Log_ServiceStartTime_date" type="text" value="3/16/2010"></input>

    And the JavaScript:

        $('form').validate();
        $('#Log_ServiceStartTime_date').rules('add', 'required');
        $('#Log_ServiceStartTime_date').rules('add', 'date');

    The problem I'm having is this: if a user puts in a bad date, or no date at all, the validation correctly kicks in and the error description displays. However, if the user clicks on the textbox to bring up the calendar picker and selects a date, the date fills into the field but the validation error does not go away. If the user clicks into the textbox and then back out, the validation error correctly goes away. How can I get the calendar picker to kick off the validation routine once a date is selected? Thanks

  • How to insert zeros between bits in a bitmap?

    - by anatolyg
    I have some performance-heavy code that performs bit manipulations. It can be reduced to the following well-defined problem: given a 13-bit bitmap, construct a 26-bit bitmap that contains the original bits spaced at even positions. To illustrate:

        0000000000000000000abcdefghijklm (input, 32 bits)
        0000000a0b0c0d0e0f0g0h0i0j0k0l0m (output, 32 bits)

    I currently have it implemented in the following way in C:

        if (input & (1 << 12)) output |= 1 << 24;
        if (input & (1 << 11)) output |= 1 << 22;
        if (input & (1 << 10)) output |= 1 << 20;
        ...

    My compiler (MS Visual Studio) turned this into the following, repeated 13 times with minor differences in the constants:

        test eax, 1000h
        jne  0064F5EC
        or   edx, 1000000h
        ...

    I wonder whether I can make it any faster. I would like to have my code written in C, but switching to assembly language is possible.

    - Can I use some MMX/SSE instructions to process all bits at once?
    - Maybe I can use multiplication? (Multiply by 0x11111111 or some other magical constant.)
    - Would it be better to use a condition-set instruction (SETcc) instead of a conditional-jump instruction? If yes, how can I make the compiler produce such code for me?
    - Any other idea how to make it faster?
    - Any idea how to do the inverse bitmap transformation (I have to implement it too, but it's less critical)?
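    One branch-free approach is the classic shift-and-mask "bit spread" (half of a Morton interleave) from the Bit Twiddling Hacks collection. A sketch in Python that checks itself against the naive per-bit loop; each line translates directly to C with unsigned 32-bit arithmetic:

        def spread_bits(x):
            """Move bit i of a 16-bit input to bit 2*i of the output."""
            x &= 0xFFFF
            x = (x | (x << 8)) & 0x00FF00FF
            x = (x | (x << 4)) & 0x0F0F0F0F
            x = (x | (x << 2)) & 0x33333333
            x = (x | (x << 1)) & 0x55555555
            return x

        def spread_bits_naive(x):
            out = 0
            for i in range(16):
                if x & (1 << i):
                    out |= 1 << (2 * i)
            return out

        assert all(spread_bits(v) == spread_bits_naive(v) for v in range(1 << 13))

    The inverse transformation runs the same steps in reverse: mask with 0x55555555, then fold with right shifts of 1, 2, 4 and 8, masking with 0x33333333, 0x0F0F0F0F, 0x00FF00FF and 0x0000FFFF in turn.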

  • wmd editor, why does it keep showing html instead of just going straight to markup

    - by Ke
    Hi, I'm wondering how WMD is supposed to work. When I type in the textarea, the text doesn't have HTML, but once the text is stored in the DB it turns into HTML, and WMD also shows all this HTML when reloading the content. Is it supposed to work like this? Do I have to sanitize the text before it's put into the DB? If so, how? I thought WMD doesn't deal with HTML except in code blocks. Also, there are p tags being added.

    Using the HTML beneath, it gets added directly. I guess this could cause XSS attacks?

        (1) <a onmouseover="alert(1)" href="#">read this!</a>
        (2) <p <script>alert(1)</script>hello
        (3) </td <script>alert(1)</script>hello

    I wonder how WMD is supposed to work. I thought it was supposed to render everything in its own markup, and store and retrieve that markup, instead of storing plain HTML.

    Cheers, Ke
