Search Results

Search found 23961 results on 959 pages for 'evidence based scheduling'.


  • How is "duplicated" Java code optimized by the JVM JIT compiler?

    - by Renan Vinícius Mozone
    I'm in charge of maintaining a JSP-based application running on IBM WebSphere 6.1 (IBM J9 JVM). All JSP pages have a static include reference, and in this include file there are some static Java methods declared. They are included in all JSP pages to offer "easy access" to those utility static methods. I know that this is a very bad way to work, and I'm working to change it. But, just out of curiosity, and to support my effort in changing this, I'm wondering how these "duplicated" static methods are optimized by the JVM JIT compiler. Are they optimized separately even though they have the exact same signature? Or does the JVM JIT compiler "see" that these methods are all identical and produce a single "unified" piece of JIT'ed code?
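
    For illustration, a minimal sketch of the setup being described (the utility method is hypothetical). A static include is a source-level inclusion, so the JSP compiler copies the declaration into every page's generated servlet class, which is where the duplication comes from:

        <%-- utils.jspf, pulled into every page via <%@ include file="utils.jspf" %> --%>
        <%!
            // Duplicated into each page's generated servlet by the JSP compiler.
            public static String formatCurrency(double value) {
                return java.text.NumberFormat.getCurrencyInstance().format(value);
            }
        %>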

    Read the article

  • DSP - How are frequency amplitudes modified using DFT?

    - by Trap
    I'm trying to implement a DFT-based equalizer (not FFT) for the sole purpose of learning. To check that it works, I took an audio signal, analyzed it, and then resynthesized it again with no modifications made to the frequency spectrum. So far so good. Then I tried to silence some frequency bands, just by setting their amplitudes to zero before resynthesis, but that is definitely not the way to go: what I get is a rather distorted signal. I'm using the so-called 'standard way of calculating the DFT', which is by correlation. I first tried modifying the real-part amplitudes only, then modifying both the real and imaginary parts. I also tried converting the DFT output to polar notation, modifying the magnitude, and converting back to rectangular notation, but none of this is working. Can someone show me what I'm doing wrong? I tried to find information on this subject on the internet but couldn't find any. Thanks in advance.
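
    Sketching the described "DFT by correlation" in Python for concreteness, with one hedged detail that is a frequent culprit in this situation: when a bin k is zeroed, its mirror bin N-k must be zeroed too, or the resynthesized signal is no longer purely real:

        import math

        def dft(x):
            # "Standard" DFT by correlation: project onto cosines and sines.
            n = len(x)
            re = [sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
                  for k in range(n)]
            im = [-sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
                  for k in range(n)]
            return re, im

        def silence_bin(re, im, k):
            # Zero bin k AND its conjugate mirror so the output stays real.
            n = len(re)
            re[k] = im[k] = 0.0
            re[(n - k) % n] = im[(n - k) % n] = 0.0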

    Read the article

  • Example of code generator you made from scratch?

    - by rosscj2533
    What are some examples of code generators you have used? I think it's a cool idea, but I have trouble thinking of things they can do besides make a class based on an object's attributes/database schema (as described in The Pragmatic Programmer). What language did you write them in, and what language did they output? Edit: Thanks for the responses so far. What I am really looking for is examples of code generators made from scratch for some specific purpose. I mentioned it in the title, but didn't make it very clear in my question. How did you go about making a code generator on your own, and what specifically did it achieve?
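
    As a toy illustration of the schema-to-class idea mentioned above (all names hypothetical), a generator can be little more than a loop that emits source text:

        # Emit a class definition from a (hypothetical) table schema.
        schema = {"table": "Users", "columns": [("id", "int"), ("name", "str")]}

        lines = ["class %s:" % schema["table"]]
        args = ", ".join(name for name, _ in schema["columns"])
        lines.append("    def __init__(self, %s):" % args)
        for name, ctype in schema["columns"]:
            lines.append("        self.%s = %s  # %s" % (name, name, ctype))
        print("\n".join(lines))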

    Read the article

  • LazyList<T> vs System.Lazy<List<T>> in ASP.NET MVC 2?

    - by FreshCode
    In Rob Conery's Storefront series, Rob makes extensive use of the LazyList<..> construct to pull data from IQueryables. How does this differ from the System.Lazy<...> construct now available in .NET 4.0 (and perhaps earlier)? More depth, based on DoctaJones' great answer: Would you recommend one over the other if I wanted to operate on an IQueryable as a List<T>? I'm assuming that since Lazy<T> is in the framework now, it is a safer bet for future support and maintainability. If I want to use a strong type instead of an anonymous (var) type, would the following statements be functionally equivalent?

        Lazy<List<Product>> products = new Lazy<List<Product>>();
        LazyList<Product> products = new LazyList<Product>();
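
    For reference, a minimal self-contained sketch of the Lazy<T> half (Product and the inline query are stand-ins for the real data source): the wrapped delegate, and therefore the query, runs only on first access to .Value:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Product { public string Name { get; set; } }

        class Demo
        {
            static void Main()
            {
                IQueryable<Product> productsQuery =
                    new List<Product> { new Product { Name = "Widget" } }.AsQueryable();

                var lazyProducts = new Lazy<List<Product>>(() => productsQuery.ToList());

                Console.WriteLine(lazyProducts.IsValueCreated); // False: nothing queried yet
                Console.WriteLine(lazyProducts.Value.Count);    // the query materializes here
            }
        }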

    Read the article

  • dynamic log4net appender name?

    - by sanjeev40084
    Let's say I have 3 SMTP appenders in the same log4net file, whose names are:

        <appender name="emailDevelopment" .. />
        <appender name="emailBeta" .. />
        <appender name="emailProduction" .. />

    Let's say I have 3 different servers (Dev, Beta, Production). Depending upon the server, I want to fire the log. In the case of the Development server, it would fire the log from "emailDevelopment". I have a system variable on each server named "ApplicationEnvironment" whose value is Development, Beta, or Production, based on the server name. Now, is there any way I can set up the root in log4net so that it fires email depending upon the server name?

        <root>
            <priority value="ALL" />
            <appender-ref ref="email<environment name from whose appender should be used>" />
        </root>
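
    One hedged sketch of a programmatic route, assuming the XML keeps all three appender-refs on <root> (log4net only instantiates appenders that some logger references): at startup, detach the two that don't match the machine's ApplicationEnvironment:

        using System;
        using System.Linq;
        using log4net;
        using log4net.Appender;
        using log4net.Repository.Hierarchy;

        // Run once after XmlConfigurator.Configure().
        var env = Environment.GetEnvironmentVariable("ApplicationEnvironment"); // "Development", ...
        var root = ((Hierarchy)LogManager.GetRepository()).Root;
        foreach (var appender in root.Appenders.OfType<IAppender>().ToList())
        {
            if (appender.Name.StartsWith("email") && appender.Name != "email" + env)
                root.RemoveAppender(appender); // keep only this server's appender
        }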

    Read the article

  • Newbie questions about COM

    - by smwikipedia
    Hi, I am quite new to COM, so these questions may seem naive.

    Q1. About Windows DLLs. Based on my understanding, a Windows DLL can export functions, types (classes), and global variables. Is this understanding all right?

    Q2. About COM. My naive understanding is that a COM DLL seems to be just a new logical way to organize the functions and types exported by a standard Windows DLL. A COM DLL exports both functions such as DllRegisterServer() and DllGetClassObject(), and also the classes that implement the IUnknown interface. Is this understanding all right?

    Q3. About *.def & *.idl. A *.def file is used to define the functions exported by a Windows DLL in the traditional way, such as DllGetClassObject(). An *.idl file is used to define the interfaces implemented by a COM coclass.

    Thanks in advance.
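
    To make Q2 and Q3 concrete, a minimal sketch of the two exports in question, and the .def entries that traditionally accompany them in an in-proc COM server:

        #include <objbase.h>

        // COM calls this (via CoGetClassObject) to obtain the class factory.
        STDAPI DllGetClassObject(REFCLSID rclsid, REFIID riid, LPVOID* ppv);

        // regsvr32 calls this to write the server's registry entries.
        STDAPI DllRegisterServer(void);

        /* Corresponding .def file:
           EXPORTS
               DllGetClassObject   PRIVATE
               DllRegisterServer   PRIVATE
        */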

    Read the article

  • linux locale unset

    - by naugtur
    I have an ARM-based machine with an Ubuntu distro on it, and it often feeds me with this while running various commands:

        Please check that your locale settings:
            LANGUAGE = (unset),
            LC_ALL = (unset),
            LANG = "pl_PL.UTF-8"

    This is the output of the locale command:

        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_MESSAGES to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: No such file or directory
        LANG=pl_PL.UTF-8
        LC_CTYPE="pl_PL.UTF-8"
        LC_NUMERIC="pl_PL.UTF-8"
        LC_TIME="pl_PL.UTF-8"
        LC_COLLATE="pl_PL.UTF-8"
        LC_MONETARY="pl_PL.UTF-8"
        LC_MESSAGES="pl_PL.UTF-8"
        LC_PAPER="pl_PL.UTF-8"
        LC_NAME="pl_PL.UTF-8"
        LC_ADDRESS="pl_PL.UTF-8"
        LC_TELEPHONE="pl_PL.UTF-8"
        LC_MEASUREMENT="pl_PL.UTF-8"
        LC_IDENTIFICATION="pl_PL.UTF-8"
        LC_ALL=

    What should I do to stop it from popping up now and then, and to configure it properly for ąęśćżńół [important characters of mine]?
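
    A sketch of the usual Ubuntu fix, assuming the pl_PL.UTF-8 locale data simply was never generated on the device:

        # Generate the missing locale definitions, then make them the default.
        sudo locale-gen pl_PL.UTF-8
        sudo update-locale LANG=pl_PL.UTF-8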

    Read the article

  • Simple ranking algorithm in Groovy

    - by Richard Paul
    I have a short Groovy algorithm for assigning rankings to food based on their rating. This can be run in the Groovy console. The code works perfectly, but I'm wondering if there is a more Groovy or functional way of writing the code. I'm thinking it would be nice to get rid of the previousItem and rank local variables if possible.

        def food = [
            [name: 'Chocolate Brownie', rating: 5.5, rank: null],
            [name: 'Pizza', rating: 3.4, rank: null],
            [name: 'Icecream', rating: 2.1, rank: null],
            [name: 'Fudge', rating: 2.1, rank: null],
            [name: 'Cabbage', rating: 1.4, rank: null]]

        food.sort { -it.rating }

        def previousItem = food[0]
        def rank = 1
        previousItem.rank = rank
        food.each { item ->
            if (item.rating == previousItem.rating) {
                item.rank = previousItem.rank
            } else {
                item.rank = rank
            }
            previousItem = item
            rank++
        }

        assert food[0].rank == 1
        assert food[1].rank == 2
        assert food[2].rank == 3
        assert food[3].rank == 3 // Note: same rating = same rank
        assert food[4].rank == 5 // Note: 4 skipped as we have two at rank 3

    Suggestions?
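
    One possible more functional variant, offered as a sketch: derive each rank from the element's index and its predecessor, which removes both locals while preserving the skip-after-tie ("standard competition") ranking:

        food.sort { -it.rating }
        food.eachWithIndex { item, i ->
            // Share the previous rank on a tie; otherwise rank = position + 1.
            item.rank = (i > 0 && item.rating == food[i - 1].rating) ? food[i - 1].rank : i + 1
        }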

    Read the article

  • Update frequency for dynamic favicons

    - by rosscowar
    I wanted to learn how to dynamically update the favicon in the Google Chrome browser, and I've noticed that the browser seems to throttle how often you can update the favicon per second, which makes things look sloppy. The test page I've made for this is http://staticadmin.com/countdown.html, which is simply a scrolling message displaying the results of a countdown. I added an input field to tweak how many pixels per second are moved by the script, and I've eyeballed the max to be about 5 frames per second rendered smoothly in Google Chrome; I have not tested it in any other browsers. My question is: what is the maximum update frequency, is there any way to change it, and is there a particular reason behind it? NOTE: I've also noticed that this value changes based on window focus as well. It seems to drop to about 1 update per second when the browser's window isn't in focus, and returns to "max" when you return.
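
    For reference, a sketch of the commonly used update technique, which replaces the <link> element outright, since browsers tend not to notice an in-place href change:

        // Swap the favicon by replacing the whole <link rel="icon"> element.
        function setFavicon(url) {
          var old = document.querySelector("link[rel~='icon']");
          if (old) old.parentNode.removeChild(old);
          var link = document.createElement('link');
          link.rel = 'icon';
          link.type = 'image/x-icon';
          link.href = url;
          document.head.appendChild(link);
        }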

    Read the article

  • Slow insert speed in Postgresql memory tablespace

    - by Prashant
    Hi, I have a requirement where I need to store records at a rate of 10,000 records/sec into a database (with indexing on a few fields). The number of columns in one record is 25. I am doing a batch insert of 100,000 records in one transaction block. To improve the insertion rate, I changed the tablespace from disk to RAM. With that I am able to achieve only 5,000 inserts per second. I have also done the following tuning in the postgres config:

        Indexes : no
        fsync : false
        logging : disabled

    Other information:

        Tablespace : RAM
        Number of columns in one row : 25 (mostly integers)
        CPU : 4 core, 2.5 GHz
        RAM : 48 GB

    I am wondering why a single insert query is taking around 0.2 msec on average when the database is not writing anything to disk (as I am using a RAM-based tablespace). Is there something I am doing wrong? Help appreciated. Prashant
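
    Not from the post, but for comparison, the two bulk-loading idioms usually reached for first in PostgreSQL (table and file names hypothetical); both cut per-row statement overhead sharply compared to one-row INSERTs:

        -- Server-side bulk load from a file:
        COPY records FROM '/tmp/records.csv' WITH (FORMAT csv);

        -- Or many rows per INSERT statement:
        INSERT INTO records (c1, c2, c3)
        VALUES (1, 2, 3), (4, 5, 6), (7, 8, 9);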

    Read the article

  • Problems with MVC Ajax.ActionLink and returning a PartialView

    - by mwright
    I'm trying to implement a simple Ajax update using MVC and have run into an issue. My understanding of how to implement Ajax with MVC is to use an Ajax.ActionLink, which allows the content to be updated based on user interaction. I have an Ajax.ActionLink that looks like the following:

        <%= Ajax.ActionLink("Call Ajax", "Ajax", new AjaxOptions { UpdateTargetId = "updateDiv" }) %>

    If, in the controller, I return a string, it works fine. However, when returning a PartialView instead, nothing happens. I can step through and verify that the controller is "returning" the partial view, but nothing shows up in what I'm calling the updateDiv. How can I go about determining what the problem is?
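
    For reference, a sketch of the controller shape that pairs with that link (controller and partial view names hypothetical). One thing worth checking in MVC 2 is that MicrosoftAjax.js and MicrosoftMvcAjax.js are referenced on the page; without them the helper degrades to a normal full-page request:

        using System.Web.Mvc;

        public class DemoController : Controller
        {
            // Returns the markup the helper injects into "updateDiv".
            public ActionResult Ajax()
            {
                return PartialView("_CallAjax"); // hypothetical partial view name
            }
        }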

    Read the article

  • 64-bit Archives Needed

    - by user9154181
    A little over a year ago, we received a question from someone who was trying to build software on Solaris. He was getting errors from the ar command when creating an archive. At that time, the ar command on Solaris was a 32-bit command. There was more than 2GB of data, and the ar command was hitting the file size limit for a 32-bit process that doesn't use the largefile APIs. Even in 2011, 2GB is a very large amount of code, so we had not heard this one before. Most of our toolchain was extended to handle 64-bit sized data back in the 1990's, but archives were not changed, presumably because there was no perceived need for it. Since then of course, programs have continued to get larger, and in 2010, the time had finally come to investigate the issue and find a way to provide for larger archives. As part of that process, I had to do a deep dive into the archive format, and also do some Unix archeology. I'm going to record what I learned here, to document what Solaris does, and in the hope that it might help someone else trying to solve the same problem for their platform.

    Archive Format Details

    Archives are hardly cutting edge technology. They are still used of course, but their basic form hasn't changed in decades. Other than to fix a bug, which is rare, we don't tend to touch that code much. The archive file format is described in /usr/include/ar.h, and I won't repeat the details here. Instead, here is a rough overview of the archive file format, implemented by System V Release 4 (SVR4) Unix systems such as Solaris:

    Every archive starts with a "magic number". This is a sequence of 8 characters: "!<arch>\n". The magic number is followed by 1 or more members. A member starts with a fixed header, defined by the ar_hdr structure in /usr/include/ar.h. Immediately following the header comes the data for the member. Members must be padded at the end with newline characters so that they have even length.

    The requirement to pad members to an even length is a dead giveaway as to the age of the archive format. It tells you that this format dates from the 1970's, and more specifically from the era of 16-bit systems such as the PDP-11 that Unix was originally developed on. A 32-bit system would have required 4 bytes, and 64-bit systems such as we use today would probably have required 8 bytes. 2 byte alignment is a poor choice for ELF object archive members. 32-bit objects require 4 byte alignment, and 64-bit objects require 64-bit alignment. The link-editor uses mmap() to process archives, and if the members have the wrong alignment, we have to slide (copy) them to the correct alignment before we can access the ELF data structures inside. The archive format requires 2 byte padding, but it doesn't prohibit more. The Solaris ar command takes advantage of this, and pads ELF object members to 8 byte boundaries. Anything else is padded to 2 as required by the format.

    The archive header (ar_hdr) represents all numeric values using an ASCII text representation rather than as binary integers. This means that an archive that contains only text members can be viewed using tools such as cat, more, or a text editor. The original designers of this format clearly thought that archives would be used for many file types, and not just for objects. Things didn't turn out that way of course — nearly all archives contain relocatable objects for a single operating system and machine, and are used primarily as input to the link-editor (ld).
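
    Since the discussion leans on the member header, here is the traditional SVR4 layout from ar.h; note that every field is ASCII text, as described above:

        /* From ar.h: every member starts with this fixed ASCII header. */
        #define ARMAG  "!<arch>\n"   /* archive magic string */
        #define SARMAG 8
        #define ARFMAG "`\n"         /* header trailer */

        struct ar_hdr {
            char ar_name[16];  /* member name */
            char ar_date[12];  /* file mtime, decimal ASCII */
            char ar_uid[6];    /* user id, decimal ASCII */
            char ar_gid[6];    /* group id, decimal ASCII */
            char ar_mode[8];   /* file mode, octal ASCII */
            char ar_size[10];  /* member size, decimal ASCII */
            char ar_fmag[2];   /* trailer, ARFMAG */
        };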

    Archives can have special members that are created by the ar command rather than being supplied by the user. These special members are all distinguished by having a name that starts with the slash (/) character. This is an unambiguous marker that says that the user could not have supplied it. The reason for this is that regular archive members are given the plain name of the file that was inserted to create them, and any path components are stripped off. Slash is the delimiter character used by Unix to separate path components, and as such cannot occur within a plain file name. The ar command hides the special members from you when you list the contents of an archive, so most users don't know that they exist. There are only two possible special members: a symbol table that maps ELF symbols to the object archive member that provides them, and a string table used to hold member names that exceed 15 characters. The '/' convention for tagging special members provides room for adding more such members should the need arise. As I will discuss below, we took advantage of this fact to add an alternate 64-bit symbol table special member which is used in archives that are larger than 4GB.

    When an archive contains ELF object members, the ar command builds a special archive member known as the symbol table that maps all ELF symbols in the objects to the archive member that provides them. The link-editor uses this symbol table to determine which symbols are provided by the objects in that archive. If an archive has a symbol table, it will always be the first member in the archive, immediately following the magic number. Unlike member headers, symbol tables do use binary integers to represent offsets. These integers are always stored in big-endian format, even on a little-endian host such as x86.

    The archive header (ar_hdr) provides 15 characters for representing the member name. If any member has a name that is longer than this, then the real name is written into a special archive member called the string table, and the member's name field instead contains a slash (/) character followed by a decimal representation of the offset of the real name within the string table. The string table is required to precede all normal archive members, so it will be the second member if the archive contains a symbol table, and the first member otherwise.

    The archive format is not designed to make finding a given member easy. Such operations move through the archive from front to back examining each member in turn, and run in O(n) time. This would be bad if archives were commonly used in that manner, but in general, they are not. Typically, the ar command is used to build a new archive from scratch, inserting all the objects in one operation, and then the link-editor accesses the members in the archive in constant time by using the offsets provided by the symbol table. Both of these operations are reasonably efficient. However, listing the contents of a large archive with the ar command can be rather slow.

    Factors That Limit Solaris Archive Size

    As is often the case, there was more than one limiting factor preventing Solaris archives from growing beyond the 32-bit limits of 2GB (32-bit signed) and 4GB (32-bit unsigned). These limits are listed in the order they are hit as archive size grows, so the earlier ones mask those that follow.

    The original Solaris archive file format can handle sizes up to 4GB without issue. However, the ar command was delivered as a 32-bit executable that did not use the largefile APIs. As such, the ar command itself could not create a file larger than 2GB. One can solve this by building ar with the largefile APIs, which would allow it to reach 4GB, but a simpler and better answer is to deliver a 64-bit ar, which has the ability to scale well past 4GB.

    Symbol table offsets are stored as 32-bit big-endian binary integers, which limits the maximum archive size to 4GB. To get around this limit requires a different symbol table format, or an extension mechanism to the current one, similar in nature to the way member names longer than 15 characters are handled in member headers.

    The size field in the archive member header (ar_hdr) is an ASCII string capable of representing a 32-bit unsigned value. This places a 4GB limit on the size of any individual member in an archive.

    In considering format extensions to get past these limits, it is important to remember that very few archives will require the ability to scale past 4GB for many years. The old format, while no beauty, continues to be sufficient for its purpose. This argues for a backward compatible fix that allows newer versions of Solaris to produce archives that are compatible with older versions of the system unless the size of the archive exceeds 4GB.

    Archive Format Differences Among Unix Variants

    While considering how to extend Solaris archives to scale to 64-bits, I wanted to know how similar archives from other Unix systems are to those produced by Solaris, and whether they had already solved the 64-bit issue. I've successfully moved archives between different Unix systems before with good luck, so I knew that there was some commonality. If it turned out that there was already a viable de facto standard for 64-bit archives, it would obviously be better to adopt that rather than invent something new.

    The archive file format is not formally standardized. However, the ar command and archive format were part of the original Unix from Bell Labs. Other systems started with that format, extending it in various often incompatible ways, but usually with the same common shared core. Most of these systems use the same magic number to identify their archives, despite the fact that their archives are not always fully compatible with each other. It is often true that archives can be copied between different Unix variants, and if the member names are short enough, the ar command from one system can often read archives produced on another. In practice, it is rare to find an archive containing anything other than objects for a single operating system and machine type. Such an archive is only of use on the type of system that created it, and is only used on that system. This is probably why cross-platform compatibility of archives between Unix variants has never been an issue. Otherwise, the use of the same magic number in archives with incompatible formats would be a problem.

    I was able to find information for a number of Unix variants, described below. These can be divided roughly into three tribes: SVR4 Unix, BSD Unix, and IBM AIX. Solaris is an SVR4 Unix, and its archives are completely compatible with those from the other members of that group (GNU/Linux, HP-UX, and SGI IRIX).

    AIX

    AIX is an exception to the rule that Unix archive formats are all based on the original Bell Labs Unix format. It appears that AIX supports 2 formats (small and big), both of which differ in fundamental ways from other Unix systems. These formats use a different magic number than the standard one used by Solaris and other Unix variants. They include support for removing archive members from a file without reallocating the file, marking dead areas as unused, and reusing them when new archive items are inserted. They have a special table of contents member (File Member Header) which lets you find out everything that's in the archive without having to actually traverse the entire file. Their symbol table members are quite similar to those from other systems though. Their member headers are doubly linked, containing offsets to both the previous and next members. Of the Unix systems described here, AIX has the only format I saw that will have reasonable insert/delete performance for really large archives. Everyone else has O(n) performance, and is going to be slow to use with large archives.

    BSD

    BSD has gone through 4 versions of archive format, which are described in their manpage. They use the same member header as SVR4, but their symbol table format is different, and their scheme for long member names puts the name directly after the member header rather than into a string table.

    GNU/Linux

    The GNU toolchain uses the SVR4 format, and is compatible with Solaris.

    HP-UX

    HP-UX seems to follow the SVR4 model, and is compatible with Solaris.

    IRIX

    IRIX has 32 and 64-bit archives. The 32-bit format is the standard SVR4 format, and is compatible with Solaris. The 64-bit format is the same, except that the symbol table uses 64-bit integers. IRIX assumes that an archive contains objects of a single ELFCLASS/MACHINE, and any archive containing ELFCLASS64 objects receives a 64-bit symbol table. Although they only use it for 64-bit objects, nothing in the archive format limits it to ELFCLASS64. It would be perfectly valid to produce a 64-bit symbol table in an archive containing 32-bit objects, text files, or anything else.

    Tru64 Unix (Digital/Compaq/HP)

    Tru64 Unix uses a format much like ours, but their symbol table is a hash table, making specific symbol lookup much faster. The Solaris link-editor uses archives by examining the entire symbol table looking for unsatisfied symbols for the link, and not by looking up individual symbols, so there would be no benefit to Solaris from such a hash table. The Tru64 ld must use a different approach in which the hash table pays off for them.

    Widening the existing SVR4 archive symbol tables rather than inventing something new is the simplest path forward. There is ample precedent for this approach in the ELF world. When ELF was extended to support 64-bit objects, the approach was largely to take the existing data structures, and define 64-bit versions of them. We called the old set ELF32, and the new set ELF64. My guess is that there was no need to widen the archive format at that time, but had there been, it seems obvious that this is how it would have been done.

    The Implementation of 64-bit Solaris Archives

    As mentioned earlier, there was no desire to improve the fundamental nature of archives. They have always had O(n) insert/delete behavior, and for the most part it hasn't mattered. AIX made efforts to improve this, but those efforts did not find widespread adoption. For the purposes of link-editing, which is essentially the only thing that archives are used for, the existing format is adequate, and issues of backward compatibility trump the desire to do something technically better. Widening the existing symbol table format to 64-bits is therefore the obvious way to proceed. For Solaris 11, I implemented that, and I also updated the ar command so that a 64-bit version is run by default. This eliminates the 2 most significant limits to archive size, leaving only the limit on an individual archive member.

    We only generate a 64-bit symbol table if the archive exceeds 4GB, or when the new -S option to the ar command is used. This maximizes backward compatibility, as an archive produced by Solaris 11 is highly likely to be less than 4GB in size, and will therefore employ the same format understood by older versions of the system. The main reason for the existence of the -S option is to allow us to test the 64-bit format without having to construct huge archives to do so. I don't believe it will find much use outside of that.

    Other than the new ability to create and use extremely large archives, this change is largely invisible to the end user. When reading an archive, the ar command will transparently accept either form of symbol table. Similarly, the ELF library (libelf) has been updated to understand either format. Users of libelf (such as the link-editor ld) do not need to be modified to use the new format, because these changes are encapsulated behind the existing functions provided by libelf.

    As mentioned above, this work did not lift the limit on the maximum size of an individual archive member. That limit remains fixed at 4GB for now. This is not because we think objects will never get that large, for the history of computing says otherwise. Rather, this is based on an estimation that single relocatable objects of that size will not appear for a decade or two. A lot can change in that time, and it is better not to overengineer things by writing code that will sit and rot for years without being used. It is not too soon however to have a plan for that eventuality. When the time comes when this limit needs to be lifted, I believe that there is a simple solution that is consistent with the existing format. The archive member header size field is an ASCII string, like the name, and as such, the overflow scheme used for long names can also be used to handle the size. The size string would be placed into the archive string table, and its offset in the string table would then be written into the archive header size field using the same format "/ddd" used for overflowed names.

    Read the article

  • Test Application Guide for Winforms

    - by Jonathan
    I'm a C# developer in a medium/small company. I usually do quick tests of the apps that my workmates make, and they test my applications. We test each form based on our experience. (Yes, I know this is not a very formal method.) Now a new guy without experience is going to join our team. We think now is the moment to make a little list of things that we all should test in each form, divided by categories. For example, usability: test that the tab order of each control is set properly. Or validation: test that the max length of each textbox matches the max length of the corresponding field in the DB... etc. We don't want to reinvent the wheel, so I want to know if such a document already exists. Thanks

    Read the article

  • How to redirect to a controller action from a JsonResult method in asp.net mvc?

    - by Pandiya Chendur
    I am fetching records for a user based on his UserId as a JsonResult...

        public JsonResult GetClients(int currentPage, int pageSize)
        {
            if (Session["UserId"] != "")
            {
                var clients = clirep.FindAllClients().AsQueryable();
                var count = clients.Count();
                var results = new PagedList<ClientBO>(clients, currentPage - 1, pageSize);
                var genericResult = new { Count = count, Results = results };
                return Json(genericResult);
            }
            else
            {
                //return RedirectToAction("Index","Home");
            }
        }

    How can I redirect to a controller action from a JsonResult method in ASP.NET MVC? Any suggestions...
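
    One common pattern, sketched here as a suggestion rather than taken from the post: since an Ajax caller won't usefully follow a server-side redirect, return the target URL in the JSON payload and navigate on the client:

        public JsonResult GetClients(int currentPage, int pageSize)
        {
            if (Session["UserId"] == null)
            {
                // Hand the destination to the client instead of redirecting here.
                return Json(new { redirectTo = Url.Action("Index", "Home") });
            }
            return Json(new { Count = 0 }); // normal payload path (abbreviated)
        }

        // Client side: if (data.redirectTo) window.location.href = data.redirectTo;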

    Read the article

  • Software to Tune/Calibrate Properties for Heuristic Algorithms

    - by Karussell
    Today I read that there is software called WinCalibra (scroll a bit down) which can take a text file with properties as input. This program can then optimize the input properties based on the output values of your algorithm. See this paper or the user documentation for more information (see link above; sadly the doc is a zipped exe). Do you know of other software that can do the same and runs under Linux? (preferably open source) EDIT: Since I need this for a Java application, I will now invest my research in Java libraries like JGAP. Other ideas and links would be appreciated!

    Read the article

  • How do I hide an inherited __published property in the derived class in a VCL component?

    - by Gary Benade
    I have created a new VCL component based on an existing VCL component. What I want to do now is set the Password and Username properties from an INI file instead of the property inspector. Robert Dunn Link: I read on the Delphi forum above that you cannot unpublish a property, and that the only workaround is to redeclare the property as read-only. I tried this, but all it does is make the property read-only and grayed out in the Object Inspector. While this could work, I would prefer if the property wasn't visible at all.

        __property System::UnicodeString Password = {read=FPassword};

    Thanks in advance for any help or links to C++ VCL component-writing tutorials. I am using C++Builder 2010.
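
    For the INI side, a hedged sketch (component, file, and field names hypothetical) using the VCL's TIniFile to fill the fields at runtime:

        #include <IniFiles.hpp>

        void __fastcall TMyComponent::LoadCredentials()
        {
            TIniFile* ini = new TIniFile("settings.ini");
            try {
                // Read straight into the backing fields, bypassing the inspector.
                FUsername = ini->ReadString("Auth", "Username", "");
                FPassword = ini->ReadString("Auth", "Password", "");
            }
            __finally {
                delete ini;
            }
        }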

    Read the article

  • Java classes in game programming

    - by Gabriel A. Zorrilla
    I'm doing a little strategy game to help me learn Java in a fun way. The thing is, I envisioned the units as objects that would draw themselves on the game map (using images and buffering) and would react to mouse actions with listeners attached to them. Now, based on some tutorials I've been reading regarding basic game programming, everything seems to be drawn in the Graphics method of my Map class. If a new unit emerges, I just update the Map graphics method; it's not as easy as making a new Unit object that draws itself... In this case, I'd be stuck with a whole bunch of Map methods instead of using classes for rendering new things. So my question is: is it possible to use classes for rendering units, interface objects, etc., or will I have to create methods and do some kind of structural programming instead of object-oriented? I'm a little bit confused, and I'd like to have a mental blueprint of how things would be organized. Thanks!
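
    A sketch of one common arrangement (names hypothetical): units remain objects that know how to draw themselves, and the map's render method only delegates, so adding a unit type never touches the map code:

        import java.awt.Graphics;
        import java.awt.Image;
        import java.util.ArrayList;
        import java.util.List;

        class Unit {
            int x, y;
            Image sprite; // loaded elsewhere

            void draw(Graphics g) {
                g.drawImage(sprite, x, y, null);
            }
        }

        class GameMap {
            final List<Unit> units = new ArrayList<Unit>();

            void render(Graphics g) {
                for (Unit u : units) {
                    u.draw(g); // each unit "self-draws"
                }
            }
        }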

    Read the article

  • .NET SortedDictionary But Sorted By Values

    - by Michael Covelli
    I need a data structure that acts like a SortedDictionary<int, double> but is sorted based on the values rather than the keys. I need it to take about 1-2 microseconds to add and remove items when we have about 3000 items in the dictionary. My first thought was simply to switch the keys and values in my code. This very nearly works: I can add and remove elements in about 1.2 microseconds in my testing by doing this. But the keys have to be unique in a SortedDictionary, so that means the values in my inverse dictionary would have to be unique, and there are some cases where they may not be. Any ideas for something already in the .NET libraries that would work for me?
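
    One hedged possibility using .NET 4.0's SortedSet<T>: sort by value but break ties on the key, so duplicate values can coexist while elements stay distinct:

        using System;
        using System.Collections.Generic;

        class ValueThenKey : IComparer<KeyValuePair<int, double>>
        {
            public int Compare(KeyValuePair<int, double> a, KeyValuePair<int, double> b)
            {
                int c = a.Value.CompareTo(b.Value);
                return c != 0 ? c : a.Key.CompareTo(b.Key); // tie-break on key
            }
        }

        class Demo
        {
            static void Main()
            {
                var byValue = new SortedSet<KeyValuePair<int, double>>(new ValueThenKey());
                byValue.Add(new KeyValuePair<int, double>(1, 2.1)); // O(log n)
                byValue.Add(new KeyValuePair<int, double>(2, 2.1)); // duplicate value is fine
                byValue.Remove(new KeyValuePair<int, double>(1, 2.1));
                Console.WriteLine(byValue.Count); // 1
            }
        }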

    Read the article

  • Is it possible to use ContainsTable to get results for more than one column?

    - by LockeCJ
    Consider the following table:

        People
            FirstName nvarchar(50)
            LastName nvarchar(50)

    Let's assume for the moment that this table has a full-text index on it for both columns. Let's suppose that I wanted to find all of the people named "John Smith" in this table. The following query seems like a perfectly rational way to accomplish this:

        SELECT *
        FROM People p
        INNER JOIN CONTAINSTABLE(People, *, '"John*" AND "Smith*"') ct
            ON p.Id = ct.[KEY]  -- key column name assumed here

    Unfortunately, this will return no results, assuming that there is no record in the People table that contains both "John" and "Smith" in either the FirstName or LastName column. It will not match a record with "John" in the FirstName column and "Smith" in the LastName column, or vice versa. My question is this: how does one accomplish what I'm trying to do above? Please consider that the example above is simplified. The real table I'm working with has ten columns, and the input I'm receiving is a single string which is split up based on standard word breakers (space, dash, etc.).
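
    One hedged alternative: CONTAINS accepts a column list, so each term can be required to appear in some indexed column, rather than both terms in the same column:

        -- Each predicate matches if the term appears in ANY of the listed columns.
        SELECT *
        FROM People p
        WHERE CONTAINS((p.FirstName, p.LastName), '"John*"')
          AND CONTAINS((p.FirstName, p.LastName), '"Smith*"');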

    Read the article

  • How to pass a string containing both single and double quotes as a parameter to XSLT in PHP?

    - by Boaz
    Hi, I have simple PHP-based XSLT transform code that looks like this:

        $xsl = new XSLTProcessor();
        $xsl->registerPHPFunctions();
        $xsl->setParameter("", "searchterms", $searchterms);
        $xsl->importStylesheet($xslDoc);
        echo $xsl->transformToXML($doc);

    The code passes the variable $searchterms, which contains a string, as a parameter to the XSLT style sheet, which in turn uses it as text:

        <title>search feed for <xsl:value-of select="$searchterms"/></title>

    This works fine until you try to pass a string with mixed quotes in it, say:

        $searchterms = '"some"' . " text's quotes are mixed.";

    At that point the XSLT processor screams: Cannot create XPath expression (string contains both quote and double-quotes). What is the correct way to safely pass arbitrary strings as input to XSLT? Note that these strings will be used as a text value in the resulting XML and not as an XPath parameter. Thanks, Boaz
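
    One hedged workaround, sketched: ship the value inside the XML input instead of as an XPath string literal, which sidesteps the quoting limitation entirely:

        // Append the search terms to the source document as a text node.
        $node = $doc->createElement('searchterms');
        $node->appendChild($doc->createTextNode($searchterms));
        $doc->documentElement->appendChild($node);
        // The stylesheet then reads it with something like
        // <xsl:value-of select="/*/searchterms"/> instead of $searchterms.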

    Read the article

  • Microsecond (or one ms) time resolution on an embedded device (Linux Kernel)

    - by ChrisDiRulli
    Hey guys, I have a kernel module I've built that requires at least 1 ms time resolution. I currently use do_gettimeofday(), but I'm concerned that this won't work once I move my module to an embedded device. The device has a 180 MHz processor (MIPS) and the default HZ value in the kernel is 100. Thus, using jiffies will only give me at best 10 ms resolution. That won't cut it. What I'd like to know is whether do_gettimeofday() is based on the timer interrupt (HZ). Can it be guaranteed to provide at least 1 ms of resolution? Thanks!
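
    For concreteness, a sketch of microsecond sampling in a module; the hedged caveat is in the comment:

        #include <linux/time.h>
        #include <linux/types.h>

        /*
         * A sketch, not a guarantee: do_gettimeofday() fills in microseconds,
         * and on most platforms it interpolates below one jiffy using a
         * hardware counter. Whether this 180 MHz MIPS port has such a
         * fine-grained clocksource must be verified on the target kernel.
         */
        static u64 now_usec(void)
        {
                struct timeval tv;

                do_gettimeofday(&tv);
                return (u64)tv.tv_sec * 1000000ULL + tv.tv_usec;
        }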

    Read the article

  • How to Add Serialized LINQ to SQL Entities to a Word 2007 Document

    - by Ryan Riley
    I built a template-based document generator using the Open XML SDK (1.0), the Word 2007 Content Control Toolkit, and LINQ to SQL (using the CodeSmith PLINQO templates). To do this, I serialized the LINQ to SQL entities to XML by retrieving each entity using the DataLoadOptions specified in the source code. This works great, except that to initially populate the XML in my template, I currently have to copy and paste the XML from the Immediate window in VS2008 into the Content Control Toolkit, and it still has all the data from the current entity. I'm looking for two answers: 1) Is this a good way to build a document generator with Word 2007? 2) How can I generate just the XML I need without the data? I've thought of creating an XSD and then creating an empty XML document, but wasn't sure how to do that programmatically so that a business user can get the XML for the template. (That's not a requirement, just a nice-to-have.) Thanks for your feedback, Ryan
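
    For question 2, one hedged idea: serializing a default-constructed entity yields the element skeleton with no real data in it (the Order entity below is a hypothetical stand-in for a PLINQO-generated class):

        using System.Xml;
        using System.Xml.Serialization;

        public class Order
        {
            public int Id { get; set; }
            public string Customer { get; set; }
        }

        class TemplateXmlWriter
        {
            static void Main()
            {
                var serializer = new XmlSerializer(typeof(Order));
                using (var writer = XmlWriter.Create("template.xml"))
                {
                    // Default-constructed instance: full element structure, no data.
                    serializer.Serialize(writer, new Order());
                }
            }
        }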

    Read the article

  • Inherit a parent class docstring as __doc__ attribute

    - by Reinout van Rees
    There is a question about inheriting docstrings in Python class inheritance, but the answers there deal with method docstrings. My question is how to inherit a docstring of a parent class as the __doc__ attribute. The use case is that Django REST framework generates nice documentation in the HTML version of your API based on your view classes' docstrings. But when inheriting a base class (with a docstring) in a class without a docstring, the API doesn't show the docstring. It might very well be that Sphinx and other tools do the right thing and handle the docstring inheritance for me, but Django REST framework looks at the (empty) .__doc__ attribute.

        class ParentWithDocstring(object):
            """Parent docstring"""
            pass

        class SubClassWithoutDocstring(ParentWithDocstring):
            pass

        parent = ParentWithDocstring()
        print parent.__doc__  # Prints "Parent docstring".

        subclass = SubClassWithoutDocstring()
        print subclass.__doc__  # Prints "None"

    I've tried something like super(SubClassWithoutDocstring, self).__doc__, but that also only got me a None.
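
    One hedged sketch (Python 2 syntax, matching the example): copy the parent docstring in at class-creation time, since __doc__ cannot be assigned on a class after it exists:

        class InheritDocstringMeta(type):
            def __new__(mcs, name, bases, attrs):
                # If the class defines no docstring, borrow the first one found
                # in its bases before the class object is created.
                if not attrs.get('__doc__'):
                    for base in bases:
                        if base.__doc__:
                            attrs['__doc__'] = base.__doc__
                            break
                return super(InheritDocstringMeta, mcs).__new__(mcs, name, bases, attrs)

        class SubClassWithoutDocstring(ParentWithDocstring):
            __metaclass__ = InheritDocstringMeta

        print SubClassWithoutDocstring.__doc__  # "Parent docstring"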

    Read the article

  • Architectural advice on connecting multiple diverse sites into a single community.

    - by Aleksandar
    Hi SO, I've been given the task of connecting multiple sites of the same client into a single network, so I would like to hear architectural advice on connecting these sites into a single community. These sites include:

        1. an Invision Power Board forum (the most important site)
        2. 3 custom-made CMSes (changes to code allowable)
        3. 1 Drupal site
        4. 3-4 WordPress blogs

    Requirements are as follows:

        1. Connecting all users of all sites into a single administrable entity, with the ability to change permissions, ban users, etc.
        2. Later on, based on this implementation, I have to implement "Facebook-like" chat, which will be available to all users regardless of place of login.

    I have a few ideas in mind on how to go about this, but I would like to hear from people with more experience and expertise than myself. Cheers!

    Read the article

  • Actionscript 3 introspection -- function names

    - by Markus O'reilly
    I am trying to iterate through each of the members of an object. For each member, I check to see if it is a function or not. If it is a function, I want to get its name and perform some logic based on the name of the function. I don't know if this is even possible, though. Is it? Any tips? Example:

        var mems:Object = getMemberNames(obj, true);
        for each (var mem:Object in mems) {
            if (!(mem is Function))
                continue;
            var func:Function = Function(mem);
            // I want something like this:
            if (func.getName().startsWith("xxxx")) {
                func.call(...);
            }
        }

    I'm having a hard time finding much on doing this. Thanks for the help.
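
    A hedged sketch of the usual route: flash.utils.describeType() exposes method names, which a Function reference by itself does not carry in AS3 (obj is the object from the post):

        import flash.utils.describeType;

        var description:XML = describeType(obj);
        // E4X: walk every <method> element in the type description.
        for each (var m:XML in description..method) {
            var methodName:String = String(m.@name);
            if (methodName.indexOf("xxxx") == 0) {
                (obj[methodName] as Function).apply(obj); // invoke by name
            }
        }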

    Read the article
