Search Results

Search found 16752 results on 671 pages for 'multi language'.

Page 428/671

  • How to deal with the new line character in the Silverlight TextBox

    - by Ian Oakes
    When using a multi-line TextBox (AcceptsReturn="True") in Silverlight, line feeds are recorded as \r rather than \r\n. This is causing problems when the data is persisted and later exported to another format to be read by a Windows application. I was thinking of using a regular expression to replace any single \r character with \r\n, but I suck at regexes and couldn't get it to work. Because there may be a mixture of line endings, just blindly replacing all \r with \r\n doesn't cut it. So two questions really: If a regex is the way to go, what's the correct pattern? Is there a way to get Silverlight to respect its own Environment.NewLine character in TextBoxes and have it insert \r\n rather than just a single \r?
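
    A minimal sketch of the replacement described above, assuming the standard .NET Regex class; the negative lookahead only matches a \r that is not already followed by \n, so existing \r\n pairs are left untouched:

        using System.Text.RegularExpressions;

        static class LineEndings
        {
            // Replace any lone \r with \r\n; \r\n pairs are skipped by the lookahead,
            // so text with mixed line endings normalizes safely.
            public static string Normalize(string text)
            {
                return Regex.Replace(text, @"\r(?!\n)", "\r\n");
            }
        }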

    Read the article

  • How to get site context/information during the PreApplicationStartMethod

    - by Mike
    When you run the same web-based application as a multi-tenant application for different clients, is there a way during PreApplicationStartMethod to gain some kind of context about the site that is being started? More specifically, I'd like to get the host header information (the "bindingInformation" attribute value from applicationHost.config). I have found ways to get this information at the time of a specific request, long after the application has started. Is there a way to get the information during the application startup process? This is an MVC 3 application on IIS 7.5.
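
    A rough sketch of one way to read the bindings at startup, assuming the Microsoft.Web.Administration assembly is available, the application pool identity may read the IIS configuration, and HostingEnvironment.SiteName is already populated when the PreApplicationStartMethod runs (worth verifying):

        using System.Web.Hosting;
        using Microsoft.Web.Administration; // IIS management objects; needs read access to applicationHost.config

        public static class StartupInfo
        {
            // Wired up via [assembly: PreApplicationStartMethod(typeof(StartupInfo), "Init")]
            public static void Init()
            {
                using (var manager = new ServerManager())
                {
                    var site = manager.Sites[HostingEnvironment.SiteName];
                    foreach (Binding binding in site.Bindings)
                    {
                        // e.g. "*:80:www.example.com"
                        System.Diagnostics.Trace.WriteLine(binding.BindingInformation);
                    }
                }
            }
        }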

    Read the article

  • How can we get unique elements ordered by decreasing or increasing count?

    - by Mohit
    The code below is taken from stackoverflow.com. Can anyone tell me how to get the array elements ordered by decreasing or increasing count? Please help. Thanks in advance.

        $contents = file_get_contents($htmlurl);

        // Get rid of style, script etc
        $search = array(
            '@<script[^>]*?>.*?</script>@si', // Strip out javascript
            '@<head>.*?</head>@siU',          // Lose the head section
            '@<style[^>]*?>.*?</style>@siU',  // Strip style tags properly
            '@<![\s\S]*?--[ \t\n\r]*>@'       // Strip multi-line comments including CDATA
        );
        $contents = preg_replace($search, '', $contents);

        $result = array_count_values(str_word_count(strip_tags($contents), 1));
        print_r($result);

    Read the article

  • What's the fastest way to scrape a lot of pages in php?

    - by Yegor
    I have a data aggregator that relies on scraping several sites and indexing their information in a way that is searchable by the user. I need to be able to scrape a vast number of pages daily, and I have run into problems using simple curl requests, which are fairly slow when executed in rapid sequence for a long time (the scraper runs 24/7, basically). Running a multi-curl request in a simple while loop is fairly slow. I sped it up by doing individual curl requests in a background process, which works faster, but sooner or later the slower requests start piling up, which ends up crashing the server. Are there more efficient ways of scraping data? Perhaps command-line curl?

    Read the article

  • Build Pipelining and Continuous Integration with Maven and Hudson

    - by Brandon
    Currently my team is considering splitting our single CI build process into a more streamlined multi-stage process, to speed up basic build feedback and isolate different CI concerns. The idea we had was to have each stage exist in Hudson as a different build with the correct Maven goal or Maven plugin execution, then chain them together using the post-build hooks of Hudson. However, to my knowledge, Maven as a build tool mandates that any lifecycle phase which is performed automatically builds every preceding lifecycle phase. This presents a number of problems, the most significant of which is that Maven recreates the build resources with each distinct call instead of reusing those of the previous stage. This not only breaks the consistency of the build lifecycle but adds a lot of unnecessary processing overhead. Is there a way to accomplish pipelining with CI using Maven? Assuming there is, is there a way to let Hudson know to use the resources built in the previous stage in the next one?

    Read the article

  • With Maven2, how would I specify a custom directory to which a dependency should be copied?

    - by Benny
    Basically, I have a multi-module project consisting of 5 different modules. One of the modules is kind of the parent module to the other 4, meaning the other 4 need to be built before the 5th, so you could say that each of the 4 modules is a dependency of the 5th. Thus, I've made dependency entries for each of the modules in the 5th module's pom.xml. However, when I build the project, I don't want those 4 dependencies copied to the "lib" directory of the 5th module. I'd like to specify the directory into which each of them should be placed explicitly. Is there any way to do this with Maven2? Thanks for your help, B.J.

    Read the article

  • What would be a good "CMS" for me to use?

    - by Tim Geerts
    Hey, I'm looking for some sort of CMS system to implement here as a "documentation" system. Now, I'm not too sure which system(s) would suit my needs best, so I thought I'd come here and type up my requirements so you could help me narrow down all the different options. One important note: I'm not looking for a system where I can store certain documents (Word, PDF, whatever), but rather a system where I can type the "documentation" text in some sort of post (like a blog). Requirements:
    - Multi-language support
    - Tagging
    - Decent search support (tags, groupings, categories)
    - Version control of posts/articles
    - Possibility of exporting post(s) to a PDF file
    - Support for multi-user access (usergroup X can only see those posts, usergroup Y can see others, etc.)
    I know these are some strange requirements when they're all combined, and I reckon most of you would perhaps say that I'd have to develop something like this in-house rather than finding a decent working product out there (open source if possible). Nonetheless, I thought I'd at least ask the opinion of y'all. Regards, Tim

    Read the article

  • HTTP request stream not readable outside of request handler

    - by Jason Young
    I'm writing a fairly complicated multi-node proxy, and at one point I need to handle an HTTP request but read from that request outside of the "http.Server" callback (I need to read the request data and line it up with a different response at a different time). The problem is, the stream is no longer readable. Below is some simple code to reproduce the issue. Is this normal, or a bug?

        var http = require('http');

        function startServer() {
            http.Server(function (req, res) {
                req.pause();
                checkRequestReadable(req);
                setTimeout(function () { checkRequestReadable(req); }, 1000);
                setTimeout(function () { res.end(); }, 1100);
            }).listen(1337);
            console.log('Server running on port 1337');
        }

        function checkRequestReadable(req) {
            // The request is not readable here!
            console.log('Request readable? ' + req.readable);
        }

        startServer();

    Read the article

  • How should my team decide between 3-tier and 2-tier architectures?

    - by j0rd4n
    My team is discussing the future direction we take our projects. Half the team believes in a pure 3-tier architecture while the other half favors a 2-tier architecture.
    Project assumptions:
    - Enterprise business applications
    - Business logic needed between user and database
    - Data validation necessary
    - Service-oriented (prefer RESTful services)
    - Multi-year maintenance plan
    - Support for hundreds of users
    The 3-tier camp favors:
    - Persistence layer <== domain layer <== UI layer
    - Service boundary between at least the persistence layer and the domain layer; the domain layer might have a service boundary within it
    - Translations between each layer (clean DTO separation)
    - Hand-rolled persistence unless we can find creative yet elegant automation
    The 2-tier camp favors:
    - Entity Framework + WCF Data Services layer <== UI layer
    - Business logic kept in WCF Data Service interceptors
    - Minimal translation between layers - favor faster coding
    So that's the high-level argument. What considerations should we take into account? What experiences have you had with either approach?

    Read the article

  • Can in-memory SQLite databases scale with concurrency?

    - by Kent Boogaart
    In order to prevent a SQLite in-memory database from being cleaned up, one must use the same connection to access the database. However, using the same connection causes SQLite to synchronize access to the database. Thus, if I have many threads performing reads against an in-memory database, it is slower on a multi-core machine than the exact same code running against a file-backed database. Is there any way to get the best of both worlds? That is, an in-memory database that permits multiple, concurrent calls to the database?
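
    One possible direction, sketched here as an assumption rather than taken from the thread: recent System.Data.SQLite builds accept URI filenames, and a shared-cache in-memory database ("file::memory:?cache=shared") lets each thread open its own connection to the same data, so reads no longer serialize on a single connection object (shared-cache table locks still apply, so it may not remove all contention):

        using System;
        using System.Data.SQLite; // assumes the System.Data.SQLite ADO.NET provider with URI-filename support

        class SharedInMemoryDb
        {
            // The database lives as long as at least one connection to it stays open.
            const string ConnString = "FullUri=file::memory:?cache=shared";

            static void Main()
            {
                using (var keepAlive = new SQLiteConnection(ConnString))
                {
                    keepAlive.Open();
                    using (var cmd = new SQLiteCommand("CREATE TABLE t (x INT)", keepAlive))
                        cmd.ExecuteNonQuery();

                    // A second connection (e.g. on another thread) sees the same database.
                    using (var other = new SQLiteConnection(ConnString))
                    {
                        other.Open();
                        using (var cmd = new SQLiteCommand("SELECT COUNT(*) FROM t", other))
                            Console.WriteLine(cmd.ExecuteScalar());
                    }
                }
            }
        }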

    Read the article

  • Most efficient way of checking for a return from a function call in Perl

    - by Gaurav Dadhania
    I want to add the return value from the function call to an array iff something is actually returned (not by default, i.e. only if I have a return statement in the subroutine). So I'm using

        unshift @{$errors}, "HashValidator::$vfunction($hashref)";

    but this actually adds the string of the function call to the array. I also tried

        unshift @{$errors}, $temp if defined my $temp = "HashValidator::$vfunction($hashref)";

    with the same result. What would a Perl one-liner look like that does this efficiently? (I know I can do the ugly, multi-line check, but I want to learn.) Thanks,

    Read the article

  • How to pass around an event as a parameter in C#

    - by Jerry Liu
    I am writing a unit test for a multi-threaded application, where I need to wait until a specific event is triggered so that I know the async operation is done. E.g. when I call repository.add(something), I wait for the AfterChange event before doing any assertions. So I wrote a util function to do that:

        public static void SyncAction(EventHandler event_, Action action_)
        {
            var signal = new object();
            EventHandler callback = null;
            callback = new EventHandler((s, e) =>
            {
                lock (signal)
                {
                    Monitor.Pulse(signal);
                }
                event_ -= callback;
            });
            event_ += callback;
            lock (signal)
            {
                action_();
                Assert.IsTrue(Monitor.Wait(signal, 10000));
            }
        }

    However, the compiler prevents passing the event out of the class. Is there a way to achieve that?
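
    A common workaround, sketched here rather than taken from the original thread: an event can only be raised or have += / -= applied from inside its declaring class, so pass subscribe/unsubscribe delegates instead of the event itself (the repository and AfterChange names below are illustrative):

        using System;
        using System.Threading;

        static class SyncHelper
        {
            // subscribe_/unsubscribe_ wrap "+=" and "-=" on the real event,
            // so the event never has to leave its declaring class.
            public static bool SyncAction(Action<EventHandler> subscribe_,
                                          Action<EventHandler> unsubscribe_,
                                          Action action_,
                                          int timeoutMs = 10000)
            {
                var signal = new object();
                EventHandler callback = null;
                callback = (s, e) =>
                {
                    lock (signal) { Monitor.Pulse(signal); }
                    unsubscribe_(callback);
                };
                subscribe_(callback);
                lock (signal)
                {
                    action_();
                    return Monitor.Wait(signal, timeoutMs);
                }
            }
        }

        // Usage (hypothetical repository with an AfterChange event):
        //   bool raised = SyncHelper.SyncAction(
        //       h => repository.AfterChange += h,
        //       h => repository.AfterChange -= h,
        //       () => repository.Add(something));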

    Read the article

  • Why hasn't functional programming taken over yet?

    - by pankrax
    I've read some texts about declarative/functional programming (languages), tried out Haskell, and have written one myself. From what I've seen, functional programming has several advantages over the classical imperative style:
    - Stateless programs; no side effects
    - Concurrency; plays extremely nicely with the rise of multi-core hardware
    - Programs are usually shorter and in some cases easier to read
    - Productivity goes up (example: Erlang)
    - Imperative programming is a very old paradigm (as far as I know) and possibly not suitable for the 21st century
    Why are companies using functional languages, or programs written in them, still so "rare"? Why, when looking at the advantages of functional programming, are we still using imperative programming languages? Maybe it was too early for it in 1990, but today?

    Read the article

  • CRT not initialized

    - by jfhs
    I'm trying to compile a project with MSVC 2010. Compilation is fine, but when I try to run the app, it gives me a "CRT not initialized" error. It is a console application, so I tried to specify mainCRTStartup as the entry point, but it didn't help. The other projects in the same solution don't have this problem. The difference I see between them is that the one which is not working uses Boost (v1.38.0, if this is important). The runtime library is Multi-threaded DLL. The linker command line is:

        /OUT:"D:\temp\ghost\Release\ghost.exe" /INCREMENTAL:NO /NOLOGO /LIBPATH:"..\zlib\lib" /LIBPATH:"..\mysql\lib\opt" /LIBPATH:"..\boost\lib" "ws2_32.lib" "winmm.lib" "zdll.lib" "StormLibRAS.lib" "kernel32.lib" "user32.lib" "gdi32.lib" "winspool.lib" "comdlg32.lib" "advapi32.lib" "shell32.lib" "ole32.lib" "oleaut32.lib" "uuid.lib" "odbc32.lib" "odbccp32.lib" "D:\temp\ghost\bncsutil\vc8_build\Release\BNCSutil.lib" /MANIFEST /ManifestFile:"Release\ghost.exe.intermediate.manifest" /ALLOWISOLATION /MANIFESTUAC:"level='asInvoker' uiAccess='false'" /DEBUG /PDB:"D:\temp\ghost\Release\ghost.pdb" /SUBSYSTEM:CONSOLE /OPT:REF /OPT:ICF /PGD:"D:\temp\ghost\Release\ghost.pgd" /LTCG /TLBID:1 /ENTRY:"mainCRTStartup" /DYNAMICBASE /NXCOMPAT /MACHINE:X86 /ERRORREPORT:QUEUE

    Read the article

  • Are C++ Reads and Writes of an int atomic

    - by theschmitzer
    I have two threads, one updating an int and one reading it. This value is a statistic where the order of the read and write is irrelevant. My question is, do I need to synchronize access to this multi-byte value anyway? Or, put another way, can part of the write complete, get interrupted, and then the read happen? For example, think of:

        value = 0x0000FFFF
        increment value to 0x00010000

    Is there a time where the value looks like 0x0001FFFF that I should be worried about? Certainly the larger the type, the more possible something like this is. I've always synchronized these types of accesses, but was curious what the community thinks.

    Read the article

  • 1 bug to kill... Letting PHP Generate The Canonical.

    - by Sam
    Hi folks, for building a clean canonical URL that always returns one base URL, I'm stuck on the following case:

        <?php # every page
        $extensions = $_SERVER['REQUEST_URI'];  # path like: /en/home.ast?ln=ja
        $qsIndex = strpos($extensions, '?');    # position of the ?ln=de part
        $pageclean = $qsIndex !== FALSE ? substr($extensions, 0, $qsIndex) : $extensions;
        $canonical = "http://website.com" . $pageclean; # basic canonical url
        ?>
        <html><head><link rel="canonical" href="<?=$canonical?>"></head>

    When the URL is http://website.com/de/home.ext?ln=de, the canonical is http://website.com/de/home.ext. BUT I want to remove the file extension as well, whether it's .php, .ext, .inc or whatever two- or three-character extension .[xx] or .[xxx], so the base URL becomes http://website.com/en/home. Aaah, much nicer! But how do I achieve that in the current code? Any hints are much appreciated! (Other advice for proper canonical usage in this multi-lingual environment is welcome as well.)

    Read the article

  • .NET Multipage Tiff with Lossy Compression

    - by Adam Berent
    I need a way to take several JPGs and convert them into a single multi-page TIFF. I have that working using GDI+; however, it only works with LZW compression, which is lossless. This means that my three 50 KB JPGs turn into a 3 MB multipage TIFF file. This is not something I can accept for the software I am working on. I know that the TIFF image format can use a JPEG compression scheme, but GDI+ does not seem to support this. Please let me know if you know how to do this in .NET (C#), or of any component that does this conversion.
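
    For reference, a sketch of the GDI+ multi-frame save being described, not a fix for the size problem: System.Drawing.Imaging's EncoderValue only exposes LZW, CCITT, RLE and None for the TIFF encoder, so JPEG-in-TIFF output would need a different imaging library or codec. The helper name below is an assumption:

        using System.Drawing;
        using System.Drawing.Imaging;
        using System.Linq;

        static class TiffWriter
        {
            // Writes all pages into one TIFF via the GDI+ TIFF encoder (LZW here).
            public static void Save(string path, params Image[] pages)
            {
                ImageCodecInfo tiffCodec = ImageCodecInfo.GetImageEncoders()
                    .First(c => c.MimeType == "image/tiff");

                // First frame: open a multi-frame file with LZW compression.
                var firstParams = new EncoderParameters(2);
                firstParams.Param[0] = new EncoderParameter(Encoder.SaveFlag, (long)EncoderValue.MultiFrame);
                firstParams.Param[1] = new EncoderParameter(Encoder.Compression, (long)EncoderValue.CompressionLZW);
                Image first = pages[0];
                first.Save(path, tiffCodec, firstParams);

                // Remaining frames are appended to the same file.
                var frameParams = new EncoderParameters(1);
                frameParams.Param[0] = new EncoderParameter(Encoder.SaveFlag, (long)EncoderValue.FrameDimensionPage);
                for (int i = 1; i < pages.Length; i++)
                    first.SaveAdd(pages[i], frameParams);

                // Close the multi-frame file.
                var flushParams = new EncoderParameters(1);
                flushParams.Param[0] = new EncoderParameter(Encoder.SaveFlag, (long)EncoderValue.Flush);
                first.SaveAdd(flushParams);
            }
        }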

    Read the article

  • Calling C from Lua crashes while reallocating

    - by mkind
    Hi folks, I got a crazy error within this for-loop:

        matr = realloc(matr, newmax * sizeof(*matr));
        for (i = 0; i < newmax; i++) {
            matr[i] = realloc(matr[i], newmax * sizeof(int));
        }

    matr is a multi-dimensional array (int **matr). I need to resize columns and rows: the first line resizes the column array and the for-loop resizes every row. It worked fine in C. Now I'm working on a library for Lua and it crashes here. Compiling works fine as well, but calling it from Lua crashes with

        lua: malloc.c:3552: mremap_chunk: Assertion `((size + offset) & (mp_.pagesize-1)) == 0' failed.

    I have no damn idea, since it works fine when used from C.

    Read the article

  • Can I Use ASP.NET Wizard Control to Insert Data into Multiple Tables?

    - by SidC
    Hello all, I have an ASP.NET 3.5 WebForms project, written in VB, that involves a multi-table SQL Server insert. That is, I want the customer to input all their contact information, order details, etc. into one control (thinking of the wizard control). Then I want to call a stored procedure that does the insert into the respective database tables. I'm familiar and comfortable with the ASP.NET wizard control. However, all the examples I've seen in my searches pertain to inserting data into one table. Questions:
    1. Given a typical order process - customer information, order information, order details - should a wizard control be used to insert data into multiple database tables? If not, what controls/workflow do you suggest?
    2. I've set primary keys and indexes on my order details, orders and customers tables. Is there special stored procedure syntax to use to ensure that referential integrity is maintained through the insert process?
    Thanks, Sid
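
    The question is about VB, but as a rough C# illustration of the flow under discussion, with hypothetical procedure, parameter, and table names: the wizard's FinishButtonClick handler would collect the values entered across the steps and pass them to one stored procedure, so the procedure can wrap the customer, order and order-detail inserts in a single transaction and use SCOPE_IDENTITY() to wire up the foreign keys:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        static class OrderRepository
        {
            // dbo.InsertOrder is hypothetical: it inserts into Customers, Orders and
            // OrderDetails inside one BEGIN TRAN / COMMIT on the server side.
            public static void SaveOrder(string connectionString,
                                         string customerName,
                                         DateTime orderDate,
                                         string orderDetailsXml)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand("dbo.InsertOrder", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("@CustomerName", customerName);
                    cmd.Parameters.AddWithValue("@OrderDate", orderDate);
                    cmd.Parameters.AddWithValue("@OrderDetails", orderDetailsXml);
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }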

    Read the article

  • Adding a child node to a JSON node dynamically

    - by Sai
    I have to create a nested, multi-level JSON structure depending on the result set that I get from MySQL. I created a JSON object initially; now I want to add child nodes to the child nodes already in the object.

        import collections

        d = collections.OrderedDict()
        jsonobj = {"test": dict(updated_at="today", ID="ID", ads=[])}

        for rows1 in rs:
            jsonobj['list']["ads"].append(dict(unit="1", type="ad_type", id="123",
                                               updated_at="today", x_id="111",
                                               x_name="test"))

        cur.execute("SELECT * from f_test")
        rs1 = cur.fetchall()
        for rows2 in rs1:
            propertiesObj = []
            d["name"] = "propName"
            d["type"] = "TypeName"
            d["value"] = "Value1"
            propertiesObj.append(d)
            jsonobj['play_list']["ads"].append()

    In the last line above I want to add another child node to [play_list].[ads], which is an array list again. The output should look like [list].[ads].[preferences].

    Read the article

  • Removing whitespace in Java string?

    - by waitinforatrain
    Hi guys, I'm writing a parser for some LISP files. I'm trying to get rid of leading whitespace in a string. The string contents are along the lines of:

        :FUNCTION (LAMBDA (DELTA PLASMA-IN-0) (IF (OR (>= #61=(+ (* 1 DELTA) PLASMA-IN-0) 100) (<= #61# 0)) PLASMA-IN-0 #61#))

    The tabs are all printed as 4 spaces in the file, so I want to get rid of these leading tabs. I tried to do this:

        string.replaceAll("\\s{4}", " ")

    but it had no effect at all on the string. Does anyone know what I'm doing wrong? Is it because it is a multi-line string? Thanks

    Read the article

  • "too many threads error" in blackberry OS-4.5

    - by SWATI
    Hi, in my application I have 20 icons (bitmap fields) on the home screen. When I click on any icon, an HTTP request is made in a separate thread. I have used the invokeLater method wherever necessary to take care of multi-threading problems. But still the number of threads goes beyond 16, and an error pops up indicating a "too many threads" error and that the application needs to be restarted. Can anybody tell me how to destroy these threads when they are no longer in use? I don't understand why they aren't destroyed on their own, as they usually are.

    Read the article

  • Good simple C/C++ FTP and SFTP client library recommendation for embedded Linux

    - by Roman Nikitchenko
    Could anyone recommend an FTP/SFTP client C/C++ library for a Linux-based embedded system? I know about the curl library, but I need something as simple as possible, just to download files from FTP/SFTP servers. Is there anything you would recommend looking at? Yes, SFTP support is critical. Actually, I can even sacrifice multi-threading, because I only need one stream at a time. And I'd like it to be able to work through memory buffers, but this should not be a problem. Thank you in advance.

    Read the article

  • OpenMP vs OpenCL for computer vision

    - by user1235711
    I am creating a computer vision application that detects objects via a web camera, and I am currently focusing on the performance of the application. My problem is in the part of the application that generates the XML cascade file using haartraining; this is very slow and takes about 6 days. To get around this problem I decided to use multiprocessing to minimize the total time needed to generate the haartraining XML file. I found two options: OpenCL, and OpenMP/OpenMPI. Now I'm confused about which one to use. I read that OpenCL is for using multiple CPUs and GPUs, but only on the same machine; is that so? On the other hand, OpenMP is for multi-processing, and using OpenMPI we can use multiple CPUs over the network, but OpenMP has no GPU support. Can you please give the pros and cons of using either of the libraries?

    Read the article
