Search Results

Search found 5435 results on 218 pages for 'ones and zeroes'.

Page 178/218

  • How to link a table to a field in MySQL server

    - by Nek
    I have this data from an XML file:

        <?xml version="1.0" encoding="utf-8" ?>
        <words>
            <id>...</id>
            <word>...</word>
            <meaning>...</meaning>
            <translation>
                <ES>...</ES>
                <PT>...</PT>
            </translation>
        </words>

    This forms a table named "words", which has four fields ("id", "word", "meaning" and "translation"). The "translation" field can hold several languages, like ES, PT, EN, JA, KO, etc., so I created a second table ("words.translation", where one field is "id" and the other ones are language ids like "ES", "PT", ...). I'm sorry for this newbie question, but I'd like to know a couple of things about this one-to-many relationship. How do I join (or link?) these two tables in MySQL? What information does the "translation" field in the "words" table have to store? What does the SQL query to get all the word information look like (is JOIN syntax used)? Thanks for your patience.
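
    One common way to model this one-to-many relationship, sketched below with illustrative names (word_translations is my invention, not from the post): drop the "translation" column from "words" entirely and let a child table carry a foreign key back to words.id.

        -- Hypothetical schema: the child table references words.id
        CREATE TABLE words (
            id      INT AUTO_INCREMENT PRIMARY KEY,
            word    VARCHAR(100) NOT NULL,
            meaning TEXT
        );

        CREATE TABLE word_translations (
            word_id     INT NOT NULL,          -- foreign key to words.id
            lang        CHAR(2) NOT NULL,      -- 'ES', 'PT', ...
            translation VARCHAR(255) NOT NULL,
            PRIMARY KEY (word_id, lang),
            FOREIGN KEY (word_id) REFERENCES words(id)
        );

        -- One joined row per translation; LEFT JOIN keeps words with none
        SELECT w.id, w.word, w.meaning, t.lang, t.translation
        FROM words w
        LEFT JOIN word_translations t ON t.word_id = w.id;

    With this layout the "words" table stores nothing about translations at all; the JOIN recovers them on demand.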

  • How to overcome programmer's block?

    - by Nicolas Dorier
    What do you do when, during the development of your application, you can't decide what to do next? You have no problem technically speaking, and no problem writing clean code, BUT you have a problem deciding what to code now. You spend time thinking, and thinking again, about your design, in the car, in the shower, and you cannot write a single line of code... I think we call this "analysis paralysis". I hate being in this state! How can you avoid it? How do you keep yourself from falling into this state? I think it occurs when we are writing a big chunk of code with no visible improvement, but I'm not sure... UPDATE: As some of you said, this problem is also what we call "programmer's block" (by analogy with writer's block). Doing some TDD doesn't help, because I'm stuck: I can't decide what class to code or what methods to put inside it (even the name of a method). Though I admit it helps to break a big chunk of code into smaller ones. As Talesh said, my head becomes full of "what-ifs".

  • Handling Application Logic in Multiple AsyncTask onPostExecute()s

    - by stormin986
    I have three simultaneous instances of an AsyncTask downloading three files. When two particular ones finish, at the end of onPostExecute() I check a flag set by each, and if both are true, I call startActivity() for the next Activity. I am currently seeing the activity launched twice, or something that resembles that behavior. Since the screen does that 'swipe left' kind of transition to the next activity, it sometimes does it twice (and when I hit back, it goes back to the same activity). Obviously two instances of an activity that SHOULD only get launched once are being put on the activity stack. The only way I can see this happening is if both AsyncTasks' onPostExecute() methods executed SO simultaneously that they were virtually running the same lines at the same time, since I set the 'itemXdownloaded' flag to true right before I check both flags and call startActivity(). But this happens often enough that it's very hard for me to believe both downloads are finishing at precisely the same time and running their onPostExecute()s that close together... Any thoughts on what could be going on here? The general gist of the code (details removed; ignore any syntax errors I may have edited in):

        // In onPostExecute()
        switch (downloadID) {
            case DL1: dl1complete = true; break;
            case DL2: dl2complete = true; break;
            case DL3: dl3complete = true; break;
        }
        // If 1 and 2 are done, move on (DL3 still going in the background)
        if (downloadID != DL3 && dl1complete && dl2complete) {
            ParentClass.this.startActivity(new Intent(ParentClass.this, NextActivity.class));
        }
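
    A sketch of the usual guard for this hand-off (onDownloadFinished and nextActivityStarted are hypothetical names, not from the post): consolidate the completion bookkeeping in one place and protect the startActivity() call with a single flag that is checked and set in the same step.

        // Hypothetical consolidation of the onPostExecute() bodies in ParentClass
        private boolean dl1complete, dl2complete, dl3complete;
        private boolean nextActivityStarted = false;

        private void onDownloadFinished(int downloadID) {
            switch (downloadID) {
                case DL1: dl1complete = true; break;
                case DL2: dl2complete = true; break;
                case DL3: dl3complete = true; break;
            }
            // Checked and set together: the Intent can only ever fire once,
            // no matter how many callbacks report completion afterwards.
            if (dl1complete && dl2complete && !nextActivityStarted) {
                nextActivityStarted = true;
                ParentClass.this.startActivity(new Intent(ParentClass.this, NextActivity.class));
            }
        }

    The flag costs nothing, and it removes the double launch even if one of the tasks somehow runs its completion callback twice.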

  • Why can't I simply copy installed Perl modules to other machines?

    - by pistacchio
    Being very new to Perl but not to dynamic languages, I'm a bit surprised at how far from straightforward the management of modules is. Sure, cpan X theoretically works, but I'm working on the same project from three different machines and OSs (at work, at home, and testing in an external environment):

    - At work (Windows 7) I have problems using cpan because our firewall makes FTP unusable.
    - At home (Mac OS X) it works.
    - In the external environment (Linux CentOS) it worked after hours of effort, because I don't have root access and had to configure cpan to operate as a non-root user.
    - I've also tried another server where I have access. Where the previous external environment is a VPS, so I have shell access, this other one is a cheap shared host where I have no way to install modules other than the pre-installed ones.

    At the moment I still can't install Template under Windows. I've seen that as an alternative I could compile it, and I've also tried ActiveState's PPM, but the module doesn't exist there. Now, my perplexity is about Perl being a dynamic language. I've had all these kinds of problems while working with C, for example, where I had to compile all the libraries for every platform, but I thought that with Perl the approach would be very similar to Python's or PHP's, where in 90% of cases copying the module into a directory and importing it simply works. So, my questions: if Perl's modules are written in Perl, why doesn't the copy/paste approach work? If some modules (or parts of them) have to be compiled, how can I see on CPAN whether a module is pure Perl or relies on compiled libraries? Isn't there a way to download the module (tar, zip...) and use cpan to deploy it? That would solve my problem under Windows.

  • Block Facebook from my website

    - by Joseph Szymborski
    I have a secure link redirection service I'm running (expiringlinks.co). If I use PHP headers to redirect my visitors, Facebook is able to show a preview of the site I'm redirecting to when users send links to one another via Facebook. I want to avoid this. Right now I'm using an AJAX call to fetch the URL and JavaScript to redirect, but that causes problems for users who don't use JavaScript. Here are a number of ways I'd like to block Facebook, but can't seem to get working:

    - I've tried blocking the Facebook bot (facebookexternalhit/1.0 and facebookexternalhit/1.1), but it's not working; I don't think they use it for this functionality (see the sketch below).
    - I'm thinking of blocking the Facebook IP addresses, but I can't find all of them, and I don't think it will work unless I get every one.
    - I've thought of using a CAPTCHA or even a button, but I can't bring myself to do that to my visitors. Not to mention I don't think anyone would use the site.
    - I've searched the Facebook docs for a meta tag that would "opt me out", but haven't found one, and I doubt I would trust it if I had.

    Any creative ideas, or any idea how to implement the ones above? Thank you so much in advance!
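
    For the first approach, a minimal sketch of a server-side user-agent check in PHP (this assumes the preview crawler really identifies itself as facebookexternalhit, which the post itself puts in doubt; $destinationUrl is a placeholder):

        <?php
        // Refuse Facebook's link-preview crawler before doing the real redirect.
        $agent = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
        if (stripos($agent, 'facebookexternalhit') !== false) {
            header('HTTP/1.1 403 Forbidden');
            exit;
        }
        // Normal visitors fall through to the redirect.
        header('Location: ' . $destinationUrl);
        exit;

    If the crawler ever stops sending that user-agent string, this check silently stops working, which may be exactly what was observed.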

  • Synchronizing one or more databases with a master database - Foreign keys

    - by Ikke
    I'm using Google Gears to be able to use an application offline (I know Gears is deprecated). The problem I am facing is synchronization with the database on the server. The specific problem is the primary keys, or more exactly, the foreign keys. When sending the information to the server, I could easily ignore the primary keys and generate new ones. But then how would I know what the relations are? I had one solution in mind, but then I would need to save all the primary keys for every client. What is the best way to synchronize multiple clients with one server DB? Edit: I've been thinking about it, and I guess sequential primary keys are not the best solution, but what other possibilities are there? Time-based keys don't seem right because of the collisions that could happen. A GUID comes to mind; is that an option? It looks like generating a GUID in JavaScript is not that easy. I could do something with natural keys or composite keys. As I think about it, that looks like the best solution. Can I expect any problems with that?
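
    On the GUID idea: a version 4 (random) UUID can be generated in a few lines of plain JavaScript. A common sketch, not cryptographically strong since Math.random() is not a secure source:

        // Build an RFC 4122-style version 4 UUID from Math.random().
        function uuid4() {
            return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function (c) {
                var r = Math.random() * 16 | 0;              // random nibble
                var v = (c === 'x') ? r : ((r & 0x3) | 0x8); // 'y' carries the variant bits
                return v.toString(16);
            });
        }

        // e.g. "3b241101-e2bb-4255-8caf-4136c566a962"
        alert(uuid4());

    Collisions are unlikely enough in practice to make client-generated keys workable for offline sync, though a weak random source weakens that guarantee.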

  • Distinct or group by on some columns but not others

    - by Nazadus
    I have a view that I'm trying to filter with something similar to DISTINCT on some columns but not others. The view looks like this:

        Name
        LastName
        Zip
        Street1
        HouseholdID   (may not be unique, because a household may have multiple
                       addresses; think of it as logically grouping persons, not
                       physical locations. If you look up HouseholdID 4130 you may
                       get two rows or more, because the person may have multiple
                       mailing locations)
        City
        State

    I need to pull all those columns but de-duplicate on LastName, Zip, and Street1. Here's the fun part: the filter is arbitrary, meaning I don't care which one of the duplicates goes away. This is for a mail-out type of thing, and the other information is not used for any reason other than looking up a specific person if needed (I have no idea why). So, given one of the surviving records, you can easily figure out the removed ones. As it stands now, my SQL-fu fails me and I'm filtering in C#, which is incredibly slow: it's pretty much a foreach that starts with an empty list and adds each row only if the combined last name, zip, and street are not already in the list. I feel like I'm missing a simple, basic part of SQL that I should understand.
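
    A sketch of the standard SQL Server pattern for keep-one-arbitrary-row-per-group. The partition columns follow the post; the view name MailingView is made up:

        -- Number rows within each (LastName, Zip, Street1) group, keep row 1
        WITH ranked AS (
            SELECT Name, LastName, Zip, Street1, HouseholdID, City, State,
                   ROW_NUMBER() OVER (
                       PARTITION BY LastName, Zip, Street1
                       ORDER BY (SELECT NULL)    -- we don't care which row survives
                   ) AS rn
            FROM MailingView
        )
        SELECT Name, LastName, Zip, Street1, HouseholdID, City, State
        FROM ranked
        WHERE rn = 1;

    Because the ORDER BY is arbitrary, this matches the "I don't care which duplicate goes away" requirement, and it runs in the database instead of a C# foreach.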

  • Boost Asio UDP retrieve last packet in socket buffer

    - by Alberto Toglia
    I have been messing around with Boost Asio for some days now, but I am stuck on this weird behavior. Please let me explain. Computer A is sending continuous UDP packets every 500 ms to computer B. Computer B wants to read A's packets at its own pace, but only wants A's last packet, obviously the most up-to-date one. It has come to my attention that when I do a

        mSocket.receive_from(boost::asio::buffer(mBuffer), mEndPoint);

    I can get OLD packets that were not yet processed (almost every time). Does this make any sense? A friend of mine told me that sockets maintain a buffer of packets, and therefore if I read at a lower frequency than the sender, this could happen. So, the first question is: how is it possible to receive the last packet and discard the ones I missed? Later I tried the async example from the Boost documentation but found it did not do what I wanted: http://www.boost.org/doc/libs/1_36_0/doc/html/boost_asio/tutorial/tutdaytime6.html From what I could tell, async_receive_from should call the "handle_receive" method when a packet arrives, and that works for the first packet after the service is "run". If I want to keep listening on the port, I should call async_receive_from again in the handler code, right? BUT what I found is that it starts an infinite loop: it doesn't wait for the next packet, it just enters "handle_receive" again and again. I'm not writing a server application, and a lot of other things are going on (it's a game), so my second question is: do I have to use threads to use the async receive method properly? Is there some example with threads and async receive? Thanks for your attention.
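
    For the first question, a minimal sketch of the drain-the-queue approach with blocking reads (variable names follow the post; error handling omitted): keep reading while the socket reports more queued data, so the loop ends holding the newest datagram.

        // Block for at least one datagram, then drain whatever is queued behind it.
        // When the loop exits, mBuffer contains the most recent packet.
        std::size_t len = mSocket.receive_from(boost::asio::buffer(mBuffer), mEndPoint);
        while (mSocket.available() > 0) {
            len = mSocket.receive_from(boost::asio::buffer(mBuffer), mEndPoint);
        }

    This discards the stale packets instead of processing them one per read, which is why the reader otherwise keeps seeing old data.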

  • Potential problem with C standard malloc'ing chars.

    - by paxdiablo
    When answering a comment to another answer of mine here, I found what I think may be a hole in the C standard (C1X; I haven't checked the earlier ones, and yes, I know it's incredibly unlikely that I alone among all the planet's inhabitants have found a bug in the standard). Information follows:

    1. Section 6.5.3.4 ("The sizeof operator") para 2 states: "The sizeof operator yields the size (in bytes) of its operand". Para 3 of that section states: "When applied to an operand that has type char, unsigned char, or signed char, (or a qualified version thereof) the result is 1".
    2. Section 7.20.3.3 describes void *malloc(size_t sz), but all it says is "The malloc function allocates space for an object whose size is specified by size and whose value is indeterminate". It makes no mention at all of what units are used for the argument.
    3. Annex E states that 8 is the minimum value for CHAR_BIT, so chars can be more than one byte in length.

    My question is simply this: in an environment where a char is 16 bits wide, will malloc(10 * sizeof(char)) allocate 10 chars (20 bytes) or 10 bytes? Point 1 above seems to indicate the former; point 2 indicates the latter. Anyone with more C-standard-fu than me have an answer for this?

  • Return only the last select's results from a stored procedure

    - by Madalina Dragomir
    The requirement says: a stored procedure meant to search data, based on 5 identifiers. If there is an exact match, return ONLY the exact match; if not, but there is an exact match on the non-null parameters, return ONLY those results; otherwise return any match on any 4 non-null parameters... and so on. My (simplified) code looks like:

        create procedure xxxSearch @a nvarchar(80), @b nvarchar(80)...
        as
        begin
            select whatever
            from MyTable t
            where ((@a is null and t.a is null) or (@a = t.a))
              and ((@b is null and t.b is null) or (@b = t.b))...

            if @@ROWCOUNT = 0
            begin
                select whatever
                from MyTable t
                where ((@a is null) or (@a = t.a))
                  and ((@b is null) or (@b = t.b))...

                if @@ROWCOUNT = 0
                begin
                    ...
                end
            end
        end

    As a result there can be several result sets selected, the first ones empty, and I only need the last one. I know it's easy to take only the last result set on the application side, but all our stored procedure calls go through a framework that expects the significant results in the first table, and I'm not eager to change it and retest all the existing SPs. Is there a way to return only the last select's results from a stored procedure? Is there a better way to do this task?
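
    A sketch of one common workaround: stage each pass into a temp table and emit a single SELECT at the end, so the caller only ever sees one result set. Names follow the simplified code above; "whatever" stands in for the real column list.

        create procedure xxxSearch @a nvarchar(80), @b nvarchar(80)
        as
        begin
            select whatever
            into #result
            from MyTable t
            where ((@a is null and t.a is null) or (@a = t.a))
              and ((@b is null and t.b is null) or (@b = t.b));

            if @@ROWCOUNT = 0
                insert into #result
                select whatever
                from MyTable t
                where ((@a is null) or (@a = t.a))
                  and ((@b is null) or (@b = t.b));

            -- ...further relaxation passes follow the same "if @@ROWCOUNT = 0" pattern...

            -- The only result set the caller ever receives:
            select * from #result;
        end

    The framework then always finds the significant rows in the first (and only) table.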

  • Recommendations for Continuous integration for Mercurial/Kiln + MSBuild + MSTest

    - by TDD
    We have our source code stored in Kiln/Mercurial repositories; we use MSBuild to build our product, and we have unit tests that use MSTest (Visual Studio unit tests). What solutions exist to implement a continuous integration machine (i.e. a build machine)? The requirements are:

    - A build should be kicked off when necessary (i.e. code has changed in the repositories we care about).
    - Before the actual build, the latest version of the source code must be acquired from the repository we are building from.
    - The build must build the entire product.
    - The build must build all unit tests.
    - The build must execute all unit tests.
    - A summary of success/failure must be sent out after the build has finished; this must include information about the build itself, but also about which unit tests failed and which ones succeeded.
    - The summary must list which changesets were in this build that were not yet in the previous successful (!) build.
    - The system must be configurable so that it can build from multiple branches(/repositories).

    Ideally, this system would run on a single box (our product isn't that big) without any server components. What solutions are currently available? What are their pros/cons? From the list above, what can be done and what cannot? Thanks

  • How to write efficient code for extracting noun phrases?

    - by Arun Abraham
    I am trying to extract phrases from POS-tagged text using rules such as the ones mentioned below ('-' indicates 'followed by'):

    1. NNP - NNP
    2. NNP - CC - NNP
    3. VP - NP
    etc.

    I have written code in this manner; can someone tell me how I can do it in a better way?

        List<String> nounPhrases = new ArrayList<String>();
        for (List<HasWord> sentence : documentPreprocessor) {
            System.out.println(Sentence.listToString(sentence, false));
            List<TaggedWord> tSentence = tagger.tagSentence(sentence);
            String lastTag = null, lastWord = null;
            for (TaggedWord taggedWord : tSentence) {
                if (lastTag != null && taggedWord.tag().equalsIgnoreCase("NNP")
                        && lastTag.equalsIgnoreCase("NNP")) {
                    nounPhrases.add(taggedWord.word() + " " + lastWord);
                }
                lastTag = taggedWord.tag();
                lastWord = taggedWord.word();
            }
        }

    In the above code I have only handled extraction of NNP followed by NNP. How can I generalise it so that I can add other rules too? I know there are libraries available for doing this, but I wanted to do it manually.
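
    A sketch of one way to generalise the matcher: represent each rule as a tag sequence and slide a window over the tagged sentence. The rules array and loop are my own illustration (they reuse tSentence and nounPhrases from the snippet above), not a library facility.

        // Each rule is simply an ordered array of POS tags.
        String[][] rules = {
            { "NNP", "NNP" },
            { "NNP", "CC", "NNP" },
        };

        for (String[] rule : rules) {
            // Slide a window of rule.length across the tagged sentence.
            for (int i = 0; i + rule.length <= tSentence.size(); i++) {
                boolean match = true;
                for (int j = 0; j < rule.length; j++) {
                    if (!tSentence.get(i + j).tag().equalsIgnoreCase(rule[j])) {
                        match = false;
                        break;
                    }
                }
                if (match) {
                    // Collect the matched words in sentence order.
                    StringBuilder phrase = new StringBuilder();
                    for (int j = 0; j < rule.length; j++) {
                        if (j > 0) phrase.append(' ');
                        phrase.append(tSentence.get(i + j).word());
                    }
                    nounPhrases.add(phrase.toString());
                }
            }
        }

    Adding a rule then means adding one line to the rules array rather than another hand-written if.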

  • CMD Prompt > FTP Size Command

    - by Samuel Baldus
    I'm developing a PHP app on IIS 7.5 which uses PHP's FTP commands. These all work, apart from ftp_size(). I've tested from the command prompt:

        cmd.exe > ftp host > username > password > SIZE filename = Invalid Command

    However, if I access the FTP site through a web browser, the file size is displayed. Do I need to install FTP extensions, and if so, which ones and where do I get them? Here is the PHP code:

        <?php
        // FTP credentials
        $ftpServer = "www.domain.com";
        $ftpUser   = "username";
        $ftpPass   = "password";

        // Unlimited time
        set_time_limit(0);

        // Connect to FTP server
        $conn = @ftp_connect($ftpServer) or die("Couldn't connect to FTP server");

        // Log in to FTP site
        $login = @ftp_login($conn, $ftpUser, $ftpPass) or die("Login credentials were rejected");

        // Set FTP passive mode = true
        ftp_pasv($conn, true);

        // Build the file list
        $ftp_nlist = ftp_nlist($conn, ".");

        // Alphabetical sorting
        sort($ftp_nlist);

        // Display output
        foreach ($ftp_nlist as $raw_file) {
            // Get the last modified time
            $mod = ftp_mdtm($conn, $raw_file);

            // Get the file size (in bytes; -1 means it is not a file)
            $size = ftp_size($conn, $raw_file);

            if ($size != -1) {
                // Output as file
                echo "Filename: $raw_file<br />";
                echo "FileSize: " . number_format($size) . " bytes<br />";
                echo "Last Modified: " . date("d/m/Y H:i", $mod) . "<br />";
            }
        }
        ?>

  • Escape whitespace in paths using a Nautilus script

    - by Tommy Brunn
    I didn't think this would be as tricky as it turned out to be, but here I am. I'm trying to write a Nautilus script in Python to upload one or more images to Imgur just by selecting and right-clicking them. It works well enough with both single and multiple images, as long as they don't contain any whitespace. In fact, you can upload a single image whose path contains whitespace, just not multiple ones. The problem is that NAUTILUS_SCRIPT_SELECTED_FILE_PATHS returns all the selected files and directories as one string. For example, it could look like this:

        print os.environ['NAUTILUS_SCRIPT_SELECTED_FILE_PATHS']
        /home/nevon/Desktop/test image.png
        /home/nevon/Desktop/test.jpg

    What I need is a way, either in bash or Python, to escape the spaces in each path, but not the spaces that delimit different items. Either that, or a way to put quotation marks around each item. The ultimate solution would be if I could do that in bash and then pass the items as separate arguments to my Python script. Something like:

        python uploader.py /home/nevon/Desktop/test\ image.png /home/nevon/Desktop/test.jpg

    I've tried RTFM'ing, but there don't seem to be many good solutions for this. At least not that I've found. Any ideas?
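
    One thing worth checking first (an assumption about Nautilus behaviour, not something stated in the post): in many Nautilus versions NAUTILUS_SCRIPT_SELECTED_FILE_PATHS is newline-delimited rather than space-delimited, in which case splitting on newlines preserves embedded spaces with no escaping at all:

        import os

        # One selected path per line; spaces inside a filename survive intact.
        raw = os.environ.get('NAUTILUS_SCRIPT_SELECTED_FILE_PATHS', '')
        paths = [p for p in raw.splitlines() if p]

        for path in paths:
            print(repr(path))   # e.g. '/home/nevon/Desktop/test image.png'

    If that holds, the bash escaping question disappears entirely; the example output above, with the paths on separate lines, suggests it does.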

  • How are external memory, internal memory, and cache organized?

    - by goldenmean
    Consider a system as follows: a hardware board with, say, an ARM Cortex-A8 and a Neon vector coprocessor, with embedded Linux running on the Cortex-A8. In this environment, if some application, say a video decoder, is executing, then:

    1. How is it decided which buffers will be in external memory and which ones will be allocated in internal SRAM, etc.?
    2. When one calls calloc/malloc on such a system, is the pointer returned from internal or external memory?
    3. Can a user force buffers to be allocated in the memory of his choice (internal/external)?
    4. In ARM architectures there is another memory, called tightly coupled memory (TCM). What is it, and how can a user enable and use it? Can I declare buffers in this memory?
    5. Do I need to look at the memory map (if any) of the hardware board to understand all the different physical memories present on a typical board?
    6. How much of a role does the OS play in distinguishing these different memories?

    Sorry for the multiple questions, but I think they are all interlinked.

  • How can one convince a team to use a new technology (LINQ, MVC, etc.)?

    - by Atomiton
    Obviously, it's easier to do with some developers, but I'm sure many of us are on teams that prefer the status quo. You know the type: you see some benefit in a piece of new technology, and they prefer the tried and true methods. Try, for example, selling a DBA/C# programmer on the advantages of using LINQ (not necessarily LINQ to SQL, just LINQ in general). Or, when a project requirement is to be cross-platform, instead of thinking about how one can run Windows on a Mac through a VM, introducing the idea of using the relatively new Silverlight or creating it in Java (as an option to look into). I know most people don't like to be outside their comfort zone, so it takes a bit of convincing, and not ALL new technology makes business sense... but how have you convinced your team to look at a new technology? What technologies have you successfully introduced to your workplace? What technologies do you think are hardest to introduce? (I'm thinking paradigm-shifting ones, like MVC from WebForms... or new languages.) What strategies do you employ to make these new technologies appealing?

  • How do you organise multiple git repositories?

    - by dbr
    With SVN, I had a single big repository that I kept on a server and checked out on a few machines. This was a pretty good backup system and allowed me to easily work on any of the machines. I could check out a specific project, commit, and it updated the 'master' project, or I could check out the entire thing. Now I have a bunch of git repositories for various projects, several of which are on GitHub. I also have the SVN repository I mentioned, imported via the git-svn command. Basically, I like having all my code (not just projects, but random snippets and scripts, some things like my CV, articles I've written, websites I've made and so on) in one big repository I can easily clone onto remote machines, or memory sticks/hard drives as backup. The problem is that it's a private repository, and git doesn't allow checking out just a specific folder (which I could push to GitHub as a separate project while having the changes appear in both the master repo and the sub-repos). I could use the git submodule system, but it doesn't act how I want it to (submodules are pointers to other repositories and don't really contain the actual code, so they're useless for backup). Currently I have a folder of git repos (for example ~/code_projects/proj1/.git/, ~/code_projects/proj2/.git/), and after making changes to proj1 I do git push github, then I copy the files into ~/Documents/code/python/projects/proj1/ and do a single commit (instead of the numerous ones in the individual repos), followed by git push backupdrive1, git push mymemorystick, etc. So, the question: how do you organise your personal code and projects in git repositories, and keep them synced and backed up?

  • Native Endians and Auto Conversion

    - by KnickerKicker
    So the following converts big-endian values to little-endian ones:

        uint32_t ntoh32(uint32_t v) {
            return (v << 24)
                 | ((v & 0x0000ff00) << 8)
                 | ((v & 0x00ff0000) >> 8)
                 | (v >> 24);
        }

    It works like a charm. I read 4 bytes from a big-endian file into char v[4] and pass them into the above function as

        ntoh32(*reinterpret_cast<uint32_t *>(v))

    That doesn't work, because my compiler (VS 2005) automatically converts the big-endian char[4] into a little-endian uint32_t when I do the cast. AFAIK this automatic conversion will not be portable, so I use:

        uint32_t ntoh_4b(char v[]) {
            uint32_t a = 0;
            a |= (unsigned char)v[0]; a <<= 8;
            a |= (unsigned char)v[1]; a <<= 8;
            a |= (unsigned char)v[2]; a <<= 8;
            a |= (unsigned char)v[3];
            return a;
        }

    Yes, the (unsigned char) cast is necessary. Yes, it is dog slow. There must be a better way. Anyone?

  • Giving write permissions to the IIS user on Windows Server 2003

    - by Steve
    I am running a website on Windows Server 2003 and IIS 6, and I am having problems writing or deleting files in a temporary folder, getting warnings like this:

        Warning: unlink(C:\Inetpub\wwwroot\cakephp\app\tmp\cache\persistent\myapp_cake_core_cake_):
        Permission denied in C:\Inetpub\wwwroot\cakephp\lib\Cake\Cache\Engine\FileEngine.php on line 254

    I went to the tmp directory, and in the Properties dialog I gave the IIS user the following permissions:

    - Read & Execute
    - List Folder Contents
    - Read

    It still shows the same warnings. In the Properties window, if I click Advanced, the IIS username appears twice: once with Allow type and Read & Execute permissions, and once with Deny type and Special permissions. My question is: should I give this user not only the Read & Execute permissions but also these?

    - Create Attributes
    - Create Files / Write Data
    - Create Folders / Append Data
    - Delete Subfolders and Files
    - Delete

    They are available to select if I click the Edit button for the username. Wouldn't I be opening a security hole if I do this? Otherwise, what can I do so my website can read and delete the files it uses? Thanks.

  • Java: which configuration framework to use?

    - by Laimoncijus
    Hi, I need to decide which configuration framework to use. At the moment I am choosing between properties files and XML files. My configuration needs some primitive grouping, e.g. in XML format something like:

        <configuration>
            <group name="abc">
                <param1>value1</param1>
                <param2>value2</param2>
            </group>
            <group name="def">
                <param3>value3</param3>
                <param4>value4</param4>
            </group>
        </configuration>

    or a properties file (something similar to log4j.properties):

        group.abc.param1 = value1
        group.abc.param2 = value2
        group.def.param3 = value3
        group.def.param4 = value4

    I need a bi-directional (read and write) configuration library/framework. A nice feature would be the ability to read different configuration groups out as different objects, so I could later pass them to different places, e.g. reading everything that belongs to group "abc" as one object and "def" as another. If that is not possible, I can always split a single configuration object into smaller ones myself in the application initialization, of course. Which framework would fit me best?
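
    For the properties-file route, a minimal sketch using only java.util.Properties from the JDK; the prefix-slicing helper is my own illustration, not a library call:

        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.util.Properties;

        public class GroupedConfig {

            // Load the whole file, e.g. lines like: group.abc.param1 = value1
            static Properties load(String path) throws IOException {
                Properties all = new Properties();
                FileInputStream in = new FileInputStream(path);
                try { all.load(in); } finally { in.close(); }
                return all;
            }

            // Extract one group ("abc") as its own Properties object,
            // with the "group.abc." prefix stripped from the keys.
            static Properties group(Properties all, String name) {
                String prefix = "group." + name + ".";
                Properties sub = new Properties();
                for (String key : all.stringPropertyNames()) {
                    if (key.startsWith(prefix)) {
                        sub.setProperty(key.substring(prefix.length()), all.getProperty(key));
                    }
                }
                return sub;
            }

            // Writing back covers the bi-directional requirement.
            static void save(Properties all, String path) throws IOException {
                FileOutputStream out = new FileOutputStream(path);
                try { all.store(out, "application configuration"); } finally { out.close(); }
            }
        }

    Each group comes out as a separate object that can be handed to a different part of the application, which is the feature asked for above.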

  • Memory leaks while using array of double

    - by Gacek
    I have a piece of code that operates on large arrays of double (containing at least 6000 elements each) and executes several hundred times (usually 800). When I use a standard loop, like this:

        double[] singleRow = new double[6000];
        int maxI = 800;
        for (int i = 0; i < maxI; i++) {
            singleRow = someObject.producesOutput();
            // ...
            // do something with singleRow
            // ...
        }

    the memory usage rises by about 40 MB (from 40 MB at the beginning of the loop to 80 MB at the end). When I force the garbage collector to execute on every iteration, the memory usage stays at the level of 40 MB (the rise is insignificant):

        double[] singleRow = new double[6000];
        int maxI = 800;
        for (int i = 0; i < maxI; i++) {
            singleRow = someObject.producesOutput();
            // ...
            // do something with singleRow
            // ...
            GC.Collect();
        }

    But the execution time is 3 times longer! (And that is crucial.) How can I get C# to reuse the same area of memory instead of allocating new arrays? Note: I have access to the code of the someObject class, so I can change it if needed.
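
    Since the post says someObject can be changed, one common fix is to let the caller own a single buffer and have the producer fill it in place. FillOutput below is a hypothetical replacement for producesOutput(), sketched to show the shape of the change:

        // Sketch: the producer writes into a caller-owned buffer
        // instead of allocating a fresh double[6000] on every call.
        public class SomeObject
        {
            public void FillOutput(double[] destination)
            {
                for (int j = 0; j < destination.Length; j++)
                {
                    destination[j] = ComputeValue(j);   // whatever producesOutput computed
                }
            }

            private double ComputeValue(int j) { return j; }   // stand-in computation
        }

        public static class Program
        {
            public static void Main()
            {
                var someObject = new SomeObject();
                double[] singleRow = new double[6000];   // one allocation, reused 800 times
                int maxI = 800;
                for (int i = 0; i < maxI; i++)
                {
                    someObject.FillOutput(singleRow);
                    // ... do something with singleRow ...
                }
            }
        }

    This removes both the memory growth and the need for GC.Collect(), so the 3x slowdown should disappear with it.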

  • Java applet loading images from external jars

    - by Mathias
    I have a jar on a server, and users should be able to develop extensions for it. The jar's main class should therefore be extended, and some resources should be added to a second, user-created jar, which will be loaded from another server or locally. Now I have problems accessing the resources (images) from the user-loaded jars. Here is the structure:

    My server: game.jar containing:

        game.class
        images.class
        ...
        image1.png (...)

    Local: user.jar containing:

        user.class (extends game)
        userimage.png

    The extension is loaded via Greasemonkey, which modifies the "archive" attribute to "/home/username/user.jar, game.jar" and the "code" attribute to "user.class". The user should be able to override already-defined images. If an image does not exist in game.jar, it is correctly loaded from user.jar. But images loaded early in the game are always loaded from game.jar; the others seem to be overridden correctly by the user. Is there a way to make sure they are always loaded in the correct order? This might be because of some caching mechanism: because Greasemonkey removes the game from the page, changes the archive and code, and reinserts it, the game is loaded without the mod for a brief second. In that time, images are loaded from game.jar as expected, and those are exactly the ones the user cannot override. But how do I avoid that? Another thing: if I override the "run" method in user.class, the game is unable to load any image at all, neither from user.jar nor from game.jar. Java doesn't find the image, as the URL object getClass().getResource(imagename) returns null. I tried overriding image.class, but that doesn't fix the problem unless I override every class from game.jar involved in calling image.class.

  • What is the performance impact of CSS's universal selector?

    - by Bungle
    I'm trying to find some simple client-side performance tweaks for a page that receives millions of monthly pageviews. One concern I have is the use of the CSS universal selector (*). As an example, consider a very simple HTML document like the following:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
            "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
            <title>Example</title>
            <style type="text/css">
                * { margin: 0; padding: 0; }
            </style>
        </head>
        <body>
            <h1>This is a heading</h1>
            <p>This is a paragraph of text.</p>
        </body>
        </html>

    The universal selector will apply the above declaration to the body, h1 and p elements, since those are the only ones in the document. In general, would I see better performance from a rule such as:

        body, h1, p { margin: 0; padding: 0; }

    Or would this have exactly the same net effect? Essentially, what I'm asking is whether these rules are effectively equivalent in this case, or whether the universal selector has to perform unnecessary extra work that I may not be aware of. I realize that the performance impact in this example may be very small, but I'm hoping to learn something that may lead to more significant performance improvements in real-world situations. Thanks for any help!

  • How to get at specific HTML elements of a document using C# and hide/show them, etc.

    - by LaserBeak
    Basically I want to load an HTML document and use controls such as multiple checkboxes, which will be programmed to hide, delete or show HTML elements with certain IDs. So I am thinking I would have to set an inline CSS visibility property on the ones I want to hide, or delete them altogether when necessary. I need this so I don't have to edit my eBay HTML templates in Dreamweaver all the time, where I usually have to scroll around messy code and manually delete or add tags and their respective content. Instead, I want to create one master template in Dreamweaver which has all the variations my products have (they are all of the same genre, with slight changes here and there), then just enable and disable the visibility of these variants as required and copy + paste the final HTML. I haven't used Windows Forms before, but I tried doing this in WebForms, which I do know a bit. I was able to get the result I want by wrapping any HTML element in an <asp:PlaceHolder></asp:PlaceHolder> and setting that placeholder's visibility to false after the associated checkbox is checked and a postback occurs; finally, I added a checkbox/button control that removes all the checkboxes, including itself, leaving the final HTML. But this method is a pain, as I have to add the placeholder tags around everything I need control over (ordinary HTML elements do not run at the server), and WebForms also injects a bunch of JavaScript and ViewState data, so I don't get clean HTML that I can just copy after viewing the page source. Any tips/code you can suggest to achieve the desired effect with the fewest changes to existing HTML documents? Ideally I would load the HTML document in, have a live design preview of it, and underneath have a bunch of well-labelled checkboxes programmed to hide, delete or show elements with certain IDs. Thanks...
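
    A sketch of how this could look in Windows Forms, using the WebBrowser control's DOM access; the element id "variant1" and the file path are made up for illustration:

        using System;
        using System.Windows.Forms;

        public class TemplateEditor : Form
        {
            private readonly WebBrowser browser = new WebBrowser();
            private readonly CheckBox toggle = new CheckBox();

            public TemplateEditor()
            {
                toggle.Text = "Show variant1";
                toggle.Dock = DockStyle.Top;
                toggle.CheckedChanged += OnToggle;
                browser.Dock = DockStyle.Fill;
                Controls.Add(browser);
                Controls.Add(toggle);
                browser.Navigate(@"C:\templates\master.html");  // hypothetical template
            }

            private void OnToggle(object sender, EventArgs e)
            {
                if (browser.Document == null) return;
                HtmlElement el = browser.Document.GetElementById("variant1");
                if (el != null)
                {
                    // Toggle the element via inline CSS, as described above.
                    el.Style = toggle.Checked ? "display: block" : "display: none";
                }
                // browser.Document.Body.OuterHtml holds the markup to copy out.
            }
        }

    The live preview comes for free from the WebBrowser control, and one checkbox per element id gives the hide/show board described in the question.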

  • jQuery: how to store record ids for data from an ajax query in dynamically created HTML elements

    - by grapkulec
    The question title is probably rather cryptic, so I will try to explain myself here; please bear with me. :) Let's assume this configuration:

    - server side: a PHP application responding to requests with data (list of items, single item details, etc.) in JSON format
    - client side: a jQuery application sending ajax requests to that PHP app and creating HTML content corresponding to the received data

    So, for example: the client requests "a list of all animals with names starting with 'A'", gets the JSON response from the server, and for every "animal" creates some HTML gizmo, like a div with the animal's description or something like that. It doesn't really matter what HTML element it is, but it has to point to a specific record by "containing" the id of that record. And here is my dilemma: is it a good solution to use the "id" attribute for that? It would be like:

        <div id="10" class="animal">
            <p>This is an animal of a very mysterious kind...</p>
        </div>
        <div id="11" class="animal">
            <p>And this one is very common in our country...</p>
        </div>

    where id="10" is of course an indication that this is the representation of the record with id = 10. Or maybe I should store the record id in some custom-made tag like <record_id>10</record_id> and leave "id" strictly for what it was meant for (CSS selectors)? I need the record id for further operations like updating the database with user input, deleting some of the "animals", creating new ones, or anything else that is needed. All manipulations will be done with jQuery, and ajax requests and responses will be visualized by dynamically creating an HTML interface. I'm sure somebody has had to deal with this kind of thing before, so I would be grateful for some tips on the topic.
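
    A common way to carry a record id without bending the "id" attribute or inventing a custom tag is an HTML5 data-* attribute, which jQuery can read back; a sketch (the class name follows the post, the click handler and endpoint are illustrative):

        <div class="animal" data-record-id="10">
            <p>This is an animal of a very mysterious kind...</p>
        </div>

        <script>
        // jQuery exposes data-record-id as .data('recordId') (camelCased).
        $('.animal').click(function () {
            var recordId = $(this).data('recordId');          // -> 10
            $.post('delete_animal.php', { id: recordId });    // hypothetical endpoint
        });
        </script>

    The markup stays valid HTML, ids remain free for styling and selection, and the record id travels with the element it describes.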
