Search Results

Search found 4646 results on 186 pages for 'multi'.

Page 131/186 | < Previous Page | 127 128 129 130 131 132 133 134 135 136 137 138  | Next Page >

  • Split table and insert with identity link

    - by The King
    Hi, I have 3 tables similar to the structure below:

        CREATE TABLE [dbo].[EmpBasic](
            [EmpID] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY,
            [Name] [varchar](50),
            [Address] [varchar](50)
        )

        CREATE TABLE [dbo].[EmpProject](
            [EmpID] [int] NOT NULL PRIMARY KEY, -- references EmpBasic
            [EmpProject] [varchar](50)
        )

        CREATE TABLE [dbo].[EmpFull_Temp](
            [ObjectID] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY,
            [T1Name] [varchar](50),
            [T1Address] [varchar](50),
            [T1EmpProject] [varchar](50)
        )

    The EmpFull_Temp table holds the records, keyed by a dummy ObjectID column. I want to populate the first two tables from it, with EmpID as the reference between them. I tried this in a stored procedure:

        CREATE TABLE #IDSS (EmpID bigint, ObjID bigint)

        INSERT INTO EmpBasic
        OUTPUT Inserted.EmpID, EmpFull_Temp.ObjectID INTO #IDSS
        SELECT T1Name, T1Address
        FROM EmpFull_Temp
        WHERE ObjectID < 106

        INSERT INTO EmpProject
        SELECT A.EmpID, B.T1EmpProject
        FROM #IDSS AS A, EmpFull_Temp AS B
        WHERE A.ObjID = B.ObjectID

    But it fails with: The multi-part identifier "EmpFull_Temp.ObjectID" could not be bound. Could you please help me achieve this?
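
    The error is expected: the OUTPUT clause of a plain INSERT ... SELECT cannot reference columns of the source table. On SQL Server 2008 and later, the usual workaround, sketched below with the question's own table names, is a MERGE whose ON clause never matches, because OUTPUT in a MERGE can see the source columns:

        -- Sketch: a never-matching MERGE inserts every source row and lets
        -- OUTPUT capture the new identity alongside the source ObjectID.
        MERGE INTO EmpBasic AS tgt
        USING (SELECT ObjectID, T1Name, T1Address
               FROM EmpFull_Temp
               WHERE ObjectID < 106) AS src
        ON 1 = 0                       -- never true, so every row is inserted
        WHEN NOT MATCHED THEN
            INSERT (Name, Address) VALUES (src.T1Name, src.T1Address)
        OUTPUT Inserted.EmpID, src.ObjectID INTO #IDSS (EmpID, ObjID);

        INSERT INTO EmpProject (EmpID, EmpProject)
        SELECT A.EmpID, B.T1EmpProject
        FROM #IDSS AS A
        JOIN EmpFull_Temp AS B ON A.ObjID = B.ObjectID;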

    Read the article

  • jquery plugins for truncating long text string by container width/height

    - by ShiVik
    Hello all. As the title says, I want to truncate a user-input text string based on the width and height of a designated container. My specification is to truncate the string, display a message like Read More at the end, and slide the full text down when the user clicks it. UPDATE: Ah! Forgot one thing. It should handle multi-byte characters as well. Can somebody shed some light on what options I have? jQuery plugins, or some nifty jQuery snippet? Thanks and Regards
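
    For reference, a rough jQuery sketch of the snippet approach (the selector and link wording are illustrative, not a particular plugin). Because it clamps by rendered height rather than slicing the string, multi-byte characters need no special handling:

        // Clamp each overflowing block to its container height;
        // reveal the rest with a slide when "Read More" is clicked.
        $('.truncate').each(function () {
            var el = this, $el = $(this);
            if (el.scrollHeight <= $el.height()) return; // already fits
            $el.css({ maxHeight: $el.height(), overflow: 'hidden' })
               .after($('<a href="#">Read More</a>').click(function (e) {
                   e.preventDefault();
                   $el.animate({ maxHeight: el.scrollHeight }); // slide down
                   $(this).remove();
               }));
        });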

    Read the article

  • How can we get unique elements in decreasing or increasing order?

    - by Mohit
    The code given below is taken from stackoverflow.com. Can anyone tell me how to get the array elements ordered, decreasing or increasing? Please help me. Thanks in advance.

        $contents = file_get_contents($htmlurl);

        // Get rid of style, script etc.
        $search = array(
            '@<script[^>]*?>.*?</script>@si',  // Strip out javascript
            '@<head>.*?</head>@siU',           // Lose the head section
            '@<style[^>]*?>.*?</style>@siU',   // Strip style tags properly
            '@<![\s\S]*?--[ \t\n\r]*>@'        // Strip multi-line comments including CDATA
        );
        $contents = preg_replace($search, '', $contents);

        $result = array_count_values(str_word_count(strip_tags($contents), 1));
        print_r($result);
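
    Since $result maps each unique word to its count, PHP's key-preserving sorts answer this directly; a minimal sketch:

        arsort($result);    // highest count first; word => count pairs kept intact
        // asort($result);  // lowest count first
        // ksort($result);  // alphabetical by word instead
        print_r($result);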

    Read the article

  • Validating an Autocomplete field in Django

    - by anonymous coward
    I have models similar to the following:

        class Band(models.Model):
            name = models.CharField(max_length=50, unique=True)

        class Event(models.Model):
            name = models.CharField(max_length=50, unique=True)
            bands = models.ManyToManyField(Band)

    Essentially I want to use the validation capability offered by a ModelForm that already exists for Event, but I do not want to show the default multi-select list (for 'bands') on the page, because the list of related models could be extremely long. I have the following form defined:

        class AddEventForm(ModelForm):
            class Meta:
                model = Event
                fields = ('name', )

    This does what is expected for the model, but of course validation couldn't care less about the 'bands' field. I've got it working well enough to add bands correctly, but there's no real validation, and it will simply drop bad band IDs. What should I do to ensure that at least one (correct) band ID has been sent along with my form? For how I'm sending the band IDs with auto-complete, see this related question: http://stackoverflow.com/questions/1528059/
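
    One way to keep the ModelForm's validation while hiding the widget, sketched below (the 'bands' POST key is assumed to be what the auto-complete submits): pull the IDs out of the raw form data in clean() and reject the form unless at least one resolves to a real Band.

        from django import forms
        from django.forms import ModelForm

        class AddEventForm(ModelForm):
            class Meta:
                model = Event
                fields = ('name', )

            def clean(self):
                cleaned = super(AddEventForm, self).clean()
                ids = self.data.getlist('bands')          # IDs posted by the widget
                bands = Band.objects.filter(pk__in=ids)   # silently drops bad IDs
                if not bands:
                    raise forms.ValidationError('Select at least one valid band.')
                cleaned['bands'] = bands                  # available when saving
                return cleaned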

    Read the article

  • Export Multiple Sheets to Excel Through Browser

    - by ProfK
    I need to export multiple data tables to Excel on the client's machine, each to its own sheet. If it were just one sheet, I'd use the Excel/csv content type, but I've heard something about an XML format that can represent an entire workbook. I don't want to go down the Packaging and .xlsx route, so I need standard .xls. Our bug tracker, Gemini, used to have an export function that produced an XML file that Excel automatically opened as a multi-sheet workbook, but I can't find it. Is there still such a mechanism, and where can I find that schema?
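
    The format being described sounds like SpreadsheetML, the XML Spreadsheet schema introduced with Excel 2002/2003, which Excel opens directly as a multi-sheet workbook. A minimal two-sheet skeleton:

        <?xml version="1.0"?>
        <?mso-application progid="Excel.Sheet"?>
        <Workbook xmlns="urn:schemas-microsoft-com:office:spreadsheet"
                  xmlns:ss="urn:schemas-microsoft-com:office:spreadsheet">
          <Worksheet ss:Name="Sheet1">
            <Table>
              <Row><Cell><Data ss:Type="String">Hello</Data></Cell></Row>
            </Table>
          </Worksheet>
          <Worksheet ss:Name="Sheet2">
            <Table>
              <Row><Cell><Data ss:Type="Number">42</Data></Cell></Row>
            </Table>
          </Worksheet>
        </Workbook>

    Served with an Excel content type (e.g. application/vnd.ms-excel), Excel will open it as a workbook containing both sheets.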

    Read the article

  • How to deal with the new line character in the Silverlight TextBox

    - by Ian Oakes
    When using a multi-line TextBox (AcceptsReturn="True") in Silverlight, line feeds are recorded as \r rather than \r\n. This is causing problems when the data is persisted and later exported to another format to be read by a Windows application. I was thinking of using a regular expression to replace any single \r characters with \r\n, but I suck at regexes and couldn't get it to work. Because there may be a mixture of line endings, just blindly replacing all \r with \r\n doesn't cut it. So, two questions really: If regex is the way to go, what's the correct pattern? Is there a way to get Silverlight to respect its own Environment.NewLine character in TextBoxes and have it insert \r\n rather than just a single \r?
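
    For the first question, a sketch of a pattern that normalizes mixed endings without doubling existing pairs: a negative lookahead matches a \r not already followed by \n, and a negative lookbehind catches any bare \n.

        using System.Text.RegularExpressions;

        static string NormalizeNewlines(string text)
        {
            text = Regex.Replace(text, @"\r(?!\n)", "\r\n");   // lone CR -> CRLF
            return Regex.Replace(text, @"(?<!\r)\n", "\r\n");  // lone LF -> CRLF
        }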

    Read the article

  • How to send mail with a large attachment using System.Net.Mail to Google Apps?

    - by Preeti
    Hi, I am trying to send mail with a large attachment (up to 1 MB or 2 MB), but sending to Google Apps fails. The code:

        MailItemEntry[] entries = new MailItemEntry[1];
        String EmlPath = "C:\\testemail.eml";
        String msg = File.ReadAllText(EmlPath);
        entries[0] = new MailItemEntry();
        entries[0].Rfc822Msg = new Rfc822MsgElement(msg);

    How can I divide the attachment into multiple parts? The exception I get while migrating this EML to Google Apps is: {"The request was aborted: The request was canceled."}
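
    If the migration upload keeps aborting, one fallback matching the title is plain System.Net.Mail over SMTP, which streams the attachment from disk rather than posting the whole message in one request. A sketch with placeholder addresses, path and credentials:

        using System.Net;
        using System.Net.Mail;

        var msg = new MailMessage("me@example.com", "you@example.com",
                                  "subject", "body");
        msg.Attachments.Add(new Attachment(@"C:\big-file.zip")); // streamed from disk

        var smtp = new SmtpClient("smtp.gmail.com", 587);
        smtp.EnableSsl = true;
        smtp.Credentials = new NetworkCredential("me@example.com", "password");
        smtp.Send(msg);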

    Read the article

  • What Use are Threads Outside of Parallel Problems on Multicore Systems?

    - by Robert S. Barnes
    Threads make the design, implementation and debugging of a program significantly more difficult. Yet many people seem to think that every task in a program that can be threaded should be threaded, even on a single core system. I can understand threading something like an MPEG2 decoder that's going to run on a multicore CPU (which I've done), but what can justify the significant development costs threading entails when you're talking about a single core system, or even a multicore system if your task doesn't gain significant performance from a parallel implementation? Or, more succinctly, what kinds of non-performance-related problems justify threading? Edit: Well, I just ran across one instance that's not CPU-limited where threads make a big difference: TCP, HTTP and the Multi-Threading Sweet Spot. Multiple threads are pretty useful when trying to max out your bandwidth to another peer over a high-latency network connection. Non-blocking I/O would use significantly less local CPU resources, but would be much more difficult to design and implement.

    Read the article

  • How to get site context/information during PreApplicationStartMethod

    - by Mike
    When you run the same web-based application as a multi-tenant application for different clients, is there a way during PreApplicationStartMethod to gain some kind of context for the site being started? More specifically, I'd like to get the host header information (the "bindingInformation" attribute value from applicationHost.config). I have found ways to get this information at the time of a specific request, long after the application has started; is there a way to get it during the application startup process? This is an MVC 3 application on IIS 7.5.
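
    One avenue worth sketching (this assumes a reference to Microsoft.Web.Administration.dll, an application pool identity allowed to read applicationHost.config, and that HostingEnvironment.SiteName is populated that early, which should be verified):

        using Microsoft.Web.Administration;
        using System.Web.Hosting;

        public static class Startup
        {
            // wired up via [assembly: PreApplicationStartMethod(typeof(Startup), "Init")]
            public static void Init()
            {
                using (var mgr = new ServerManager())
                {
                    var site = mgr.Sites[HostingEnvironment.SiteName];
                    foreach (Binding b in site.Bindings)
                    {
                        // e.g. "*:80:tenant1.example.com"
                        string info = b.BindingInformation;
                    }
                }
            }
        }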

    Read the article

  • Cilk or Cilk++ or OpenMP

    - by Aman Deep Gautam
    I'm creating a multi-threaded application in Linux. Here is the scenario: suppose I have x instances of a class BloomFilter and some y GB of data (more than the available memory). I need to test membership for this y GB of data against each of the Bloom filter instances. It is pretty clear that parallel programming will help speed up the task; moreover, since I am only reading the data, it can be shared across all processes or threads. Now I am confused about which one to use: Cilk, Cilk++ or OpenMP (which one is better)? I am also confused about whether to go for multithreading or multiprocessing.
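
    For comparison, an OpenMP sketch of the read-only membership test (BloomFilter and bloom_contains are illustrative stand-ins, not a real library); since nothing is written, the filters and data can be shared across threads with no locking:

        /* compile with: gcc -std=c99 -fopenmp -O2 bloom.c */
        #include <stdbool.h>
        #include <stddef.h>

        typedef struct { unsigned char bits[1 << 20]; } BloomFilter;

        static bool bloom_contains(const BloomFilter *bf, unsigned long h) {
            size_t bit = h % (sizeof bf->bits * 8);   /* toy single-hash check */
            return bf->bits[bit / 8] & (1u << (bit % 8));
        }

        void test_all(const BloomFilter *filters, int nfilters,
                      const unsigned long *items, long nitems, bool *out) {
            /* each (filter, item) cell is independent: no locks on reads */
            #pragma omp parallel for collapse(2)
            for (int f = 0; f < nfilters; f++)
                for (long i = 0; i < nitems; i++)
                    out[f * nitems + i] = bloom_contains(&filters[f], items[i]);
        }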

    Read the article

  • jQuery UI sortable on select options

    - by user1038814
    I'm trying to get jQuery UI's default sortable to work on the options in a multi-select list box, but can't seem to get it working. Can this work on select options? I've only seen examples with <li> elements everywhere. Here's my JavaScript:

        $(function() {
            $( "#secondSelectms2side__dx" ).sortable();
            $( "#secondSelectms2side__dx" ).disableSelection();
        });

    And the HTML:

        <select title="" name="secondSelectms2side__dx" id="secondSelectms2side__dx"
                size="8" multiple="multiple">
            <option value="4">asdsdsds</option>
            <option value="10">bsdsdsdsd</option>
            <option value="2">csdsdsds</option>
        </select>

    My code is on jsFiddle: http://jsfiddle.net/noscirre/DRUPe/
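
    Browsers don't let <option> elements be dragged the way sortable expects, so the plugin has nothing to grab; the usual routes are rendering the options as an <li> list, or reordering with buttons. A sketch of the button approach (#moveUp and #moveDown are assumed buttons, not part of the question's markup):

        $('#moveUp').click(function () {
            $('#secondSelectms2side__dx option:selected').each(function () {
                $(this).insertBefore($(this).prev());   // no-op on the first option
            });
        });
        $('#moveDown').click(function () {
            $('#secondSelectms2side__dx option:selected').each(function () {
                $(this).insertAfter($(this).next());    // no-op on the last option
            });
        });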

    Read the article

  • What's the fastest way to scrape a lot of pages in php?

    - by Yegor
    I have a data aggregator that relies on scraping several sites and indexing their information in a way that is searchable to the user. I need to be able to scrape a vast number of pages daily, and I have run into problems using simple curl requests, which are fairly slow when executed in rapid sequence for a long time (the scraper runs 24/7, basically). Running a multi curl request in a simple while loop is fairly slow. I sped it up by doing individual curl requests in a background process, which works faster, but sooner or later the slower requests start piling up, which ends up crashing the server. Are there more efficient ways of scraping data? Perhaps command-line curl?
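
    For reference, a sketch of the curl_multi_* pattern (URLs are placeholders); driven this way, the transfers run concurrently inside one PHP process instead of one blocking request at a time:

        $urls = array('http://example.com/a', 'http://example.com/b');
        $mh = curl_multi_init();
        $handles = array();
        foreach ($urls as $url) {
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_TIMEOUT, 30);   // cap the slow requests
            curl_multi_add_handle($mh, $ch);
            $handles[$url] = $ch;
        }
        do {
            curl_multi_exec($mh, $running);
            curl_multi_select($mh);                  // sleep until activity; no busy-wait
        } while ($running > 0);
        foreach ($handles as $url => $ch) {
            $html = curl_multi_getcontent($ch);      // process/index $html here
            curl_multi_remove_handle($mh, $ch);
            curl_close($ch);
        }
        curl_multi_close($mh);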

    Read the article

  • What would be a good "CMS" for me to use?

    - by Tim Geerts
    Hey, I'm looking for some sort of CMS to implement here as a "documentation" system. I'm not sure which systems would suit my needs best, so I thought I'd come here and type up my requirements so you could help me narrow down the options. One important note: I'm not looking for a system to store documents (Word, PDF, whatever), but rather one where I can type the documentation text into some sort of post (like a blog). Requirements:
    - Multilanguage support
    - Tagging
    - Decent search support (tags, groupings, categories)
    - Version control of posts/articles
    - Possibility of exporting post(s) to a PDF file
    - Multi-user support (usergroup X can only see these posts, usergroup Y can see others, etc.)
    I know these are some strange requirements when combined, and I reckon most of you would say I'd have to develop something like this in-house rather than find a decent working product out there (open source if possible). Nonetheless, I thought I'd at least ask the opinion of y'all. Regards, Tim

    Read the article

  • With Maven2, how would I specify a custom directory to which a dependency should be copied?

    - by Benny
    Basically, I have a multi-module project consisting of 5 different modules. One of the modules is effectively the parent of the other 4, meaning the other 4 need to be built before the 5th; you could say that each of the 4 modules is a dependency of the 5th. Thus, I've made dependency entries for each of the modules in the 5th module's pom.xml. However, when I build the project, I don't want those 4 dependencies copied to the "lib" directory of the 5th module; I'd like to explicitly specify the directory into which each of them should be placed. Is there any way to do this with Maven2? Thanks for your help, B.J.
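
    A sketch of one way, using the maven-dependency-plugin's copy goal in the 5th module's pom.xml (the coordinates and target directory are placeholders for the real sibling modules):

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-dependency-plugin</artifactId>
          <executions>
            <execution>
              <id>copy-sibling-modules</id>
              <phase>package</phase>
              <goals><goal>copy</goal></goals>
              <configuration>
                <artifactItems>
                  <!-- one artifactItem per module, each with its own target dir -->
                  <artifactItem>
                    <groupId>com.example</groupId>
                    <artifactId>module-a</artifactId>
                    <version>${project.version}</version>
                    <outputDirectory>${project.build.directory}/custom-dir</outputDirectory>
                  </artifactItem>
                </artifactItems>
              </configuration>
            </execution>
          </executions>
        </plugin>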

    Read the article

  • How should my team decide between 3-tier and 2-tier architectures?

    - by j0rd4n
    My team is discussing the future direction of our projects. Half the team believes in a pure 3-tier architecture while the other half favors a 2-tier architecture.

    Project assumptions:
    - Enterprise business applications
    - Business logic needed between user and database
    - Data validation necessary
    - Service-oriented (prefer RESTful services)
    - Multi-year maintenance plan
    - Support for hundreds of users

    The 3-tier camp favors:
    - Persistence layer <== Domain layer <== UI layer
    - Service boundary between at least the persistence layer and the domain layer; the domain layer might have a service boundary within it
    - Translations between each layer (clean DTO separation)
    - Hand-rolled persistence unless we can find creative yet elegant automation

    The 2-tier camp favors:
    - Entity Framework + WCF Data Services layer <== UI layer
    - Business logic kept in WCF Data Services interceptors
    - Minimal translation between layers; favor faster coding

    So that's the high-level argument. What considerations should we take into account? What experiences have you had with either approach?

    Read the article

  • HTTP request stream not readable outside of request handler

    - by Jason Young
    I'm writing a fairly complicated multi-node proxy, and at one point I need to handle an HTTP request but read from that request outside of the "http.Server" callback (I need to read from the request data and line it up with a different response at a different time). The problem is, the stream is no longer readable. Below is some simple code to reproduce the issue. Is this normal, or a bug?

        var http = require('http');

        function startServer() {
            http.Server(function (req, res) {
                req.pause();
                checkRequestReadable(req);
                setTimeout(function() { checkRequestReadable(req); }, 1000);
                setTimeout(function() { res.end(); }, 1100);
            }).listen(1337);
            console.log('Server running on port 1337');
        }

        function checkRequestReadable(req) {
            // The request is not readable here!
            console.log('Request readable? ' + req.readable);
        }

        startServer();
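
    One workaround sketch: rather than expecting the socket stream to stay readable later, capture the body as it arrives inside the handler and carry the buffered copy around to match up with the other response:

        var http = require('http');

        http.createServer(function (req, res) {
            var chunks = [];
            req.on('data', function (c) { chunks.push(c); });
            req.on('end', function () {
                var body = Buffer.concat(chunks);   // replayable copy of the request body
                setTimeout(function () {
                    // line `body` up with the other response here
                    res.end();
                }, 1000);
            });
        }).listen(1337);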

    Read the article

  • Can in-memory SQLite databases scale with concurrency?

    - by Kent Boogaart
    In order to prevent a SQLite in-memory database from being cleaned up, one must use the same connection to access the database. However, using the same connection causes SQLite to synchronize access to the database. Thus, if I have many threads performing reads against an in-memory database, it is slower on a multi-core machine than the exact same code running against a file-backed database. Is there any way to get the best of both worlds? That is, an in-memory database that permits multiple, concurrent calls to the database?
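
    One direction worth sketching (C API; requires a SQLite build new enough for shared-cache URIs, 3.7.13+): several connections can address the same in-memory database through shared-cache mode, so each thread gets its own connection instead of serializing on one.

        #include <sqlite3.h>

        sqlite3 *db;
        int rc = sqlite3_open_v2("file::memory:?cache=shared", &db,
                                 SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE |
                                 SQLITE_OPEN_URI, NULL);
        /* each thread opens the same URI; the pages live in one shared cache */

    Note that shared-cache connections still take table-level locks, so heavy writers will contend; for mostly-read workloads it relieves the single-connection bottleneck.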

    Read the article

  • Build Pipelining and Continuous Integration with Maven and Hudson

    - by Brandon
    Currently my team is considering splitting our single CI build process into a more streamlined multi-stage process, to speed up basic build feedback and isolate different CI concerns. The idea was to have each stage exist in Hudson as a separate build with the correct Maven goal or plugin execution, then chain them together using Hudson's post-build hooks. However, to my knowledge Maven mandates that running any lifecycle phase automatically runs every preceding lifecycle phase. This presents a number of problems, the most significant of which is that Maven recreates the build resources with each distinct call rather than using those of the previous stage. This not only breaks the consistency of the build lifecycle but adds a lot of unnecessary processing overhead. Is there a way to accomplish pipelining with CI using Maven? Assuming there is, is there a way to let Hudson know to use the resources built in the previous stage for the next one?

    Read the article

  • Most efficient way of checking for a return from a function call in Perl

    - by Gaurav Dadhania
    I want to add the return value from the function call to an array iff something is returned (not by default, i.e. only if there is a return statement in the subroutine), so I'm using:

        unshift @{$errors}, "HashValidator::$vfunction($hashref)";

    but this actually adds the string of the function call to the array. I also tried:

        unshift @{$errors}, $temp if defined my $temp = "HashValidator::$vfunction($hashref)";

    with the same result. What would a Perl one-liner look like that does this efficiently? (I know I can do the ugly multi-line check, but I want to learn.) Thanks,
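
    The quotes are the problem: a double-quoted string only interpolates the variables, it never calls anything. A sketch of one way to actually invoke the function named at runtime and keep the result only when something came back:

        no strict 'refs';    # allow the symbolic &{"Package::name"} call below
        my @ret = &{"HashValidator::$vfunction"}($hashref);
        unshift @{$errors}, @ret if @ret;   # an empty list means nothing was returned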

    Read the article

  • How to pass around an event as a parameter in C#

    - by Jerry Liu
    I am writing a unit test for a multi-threading application, where I need to wait until a specific event is triggered so that I know the async operation is done. E.g. when I call repository.add(something), I wait for the event AfterChange before doing any assertion. So I wrote a util function to do that:

        public static void SyncAction(EventHandler event_, Action action_)
        {
            var signal = new object();
            EventHandler callback = null;
            callback = new EventHandler((s, e) =>
            {
                lock (signal)
                {
                    Monitor.Pulse(signal);
                }
                event_ -= callback;
            });
            event_ += callback;
            lock (signal)
            {
                action_();
                Assert.IsTrue(Monitor.Wait(signal, 10000));
            }
        }

    However, the compiler prevents passing the event out of the class. Is there a way to achieve that?
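
    There isn't a direct way: outside its declaring class an event only supports += and -=, so it can't be passed as a value. The usual workaround, sketched here, is to hand SyncAction a pair of subscribe/unsubscribe delegates instead:

        public static void SyncAction(Action<EventHandler> subscribe,
                                      Action<EventHandler> unsubscribe,
                                      Action action)
        {
            var signal = new object();
            EventHandler callback = (s, e) => { lock (signal) Monitor.Pulse(signal); };
            subscribe(callback);
            try
            {
                lock (signal)
                {
                    action();
                    Assert.IsTrue(Monitor.Wait(signal, 10000));  // fail if no event in 10s
                }
            }
            finally
            {
                unsubscribe(callback);   // always detach, even on assert failure
            }
        }

        // usage:
        // SyncAction(h => repository.AfterChange += h,
        //            h => repository.AfterChange -= h,
        //            () => repository.add(something));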

    Read the article

  • Why hasn't functional programming taken over yet?

    - by pankrax
    I've read some texts about declarative/functional programming (languages), tried out Haskell, and written one myself. From what I've seen, functional programming has several advantages over the classical imperative style:

    - Stateless programs; no side effects
    - Concurrency; plays extremely nicely with the rising multi-core technology
    - Programs are usually shorter and in some cases easier to read
    - Productivity goes up (example: Erlang)
    - Imperative programming is a very old paradigm (as far as I know) and possibly not suitable for the 21st century

    So why are companies using, or programs written in, functional languages still so "rare"? Why, when looking at the advantages of functional programming, are we still using imperative programming languages? Maybe it was too early for it in 1990, but today?

    Read the article

  • .NET Multipage Tiff with Lossy Compression

    - by Adam Berent
    I need a way to take several JPGs and convert them into a single multi-page TIFF. I have that working using GDI+; however, it only works with LZW compression, which is lossless. This means that my three 50KB JPGs turn into a 3MB multipage TIFF file. That is not something I can accept for the software I am working on. I know that the TIFF image format can use a JPEG compression scheme, but GDI+ does not seem to support this. Does anyone know how to do this in .NET (C#), or of any component that does this conversion?
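
    For context, a sketch of the GDI+ multi-frame save being described; the Encoder.Compression parameter for TIFF only accepts the lossless/CCITT EncoderValue options, which is exactly the wall being hit, so JPEG-in-TIFF needs a third-party imaging library instead:

        using System.Drawing;
        using System.Drawing.Imaging;
        using System.Linq;

        var codec = ImageCodecInfo.GetImageEncoders()
                                  .First(c => c.MimeType == "image/tiff");
        var ep = new EncoderParameters(2);
        ep.Param[0] = new EncoderParameter(Encoder.SaveFlag, (long)EncoderValue.MultiFrame);
        ep.Param[1] = new EncoderParameter(Encoder.Compression, (long)EncoderValue.CompressionLZW);

        using (var first = Image.FromFile("page1.jpg"))
        {
            first.Save("out.tif", codec, ep);   // first frame creates the file
            ep.Param[0] = new EncoderParameter(Encoder.SaveFlag, (long)EncoderValue.FrameDimensionPage);
            using (var next = Image.FromFile("page2.jpg"))
                first.SaveAdd(next, ep);        // append each further page
            ep.Param[0] = new EncoderParameter(Encoder.SaveFlag, (long)EncoderValue.Flush);
            first.SaveAdd(ep);                  // close the multi-frame file
        }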

    Read the article

  • Can I Use ASP.NET Wizard Control to Insert Data into Multiple Tables?

    - by SidC
    Hello All, I have an ASP.NET 3.5 WebForms project written in VB that involves a multi-table SQL Server insert. That is, I want the customer to input all their contact information, order details etc. into one control (thinking wizard control). Then, I want to call a stored procedure that does the insert into the respective database tables. I'm familiar and comfortable with the ASP.NET wizard control; however, all the examples I've seen in my searches pertain to inserting data into one table. Questions:
    1. Given a typical order process (customer information, order information, order details), should a wizard control be used to insert data into multiple database tables? If not, what controls/workflow do you suggest?
    2. I've set primary keys and indexes on my order details, orders and customers tables. Is there special stored procedure syntax to use to ensure that referential integrity is maintained through the insert process?
    Thanks, Sid
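
    On question 2: there is no special syntax beyond an ordinary transaction; insert the parent rows first and feed their new keys to the children. A sketch with assumed table and column names:

        -- Parent -> child inserts inside one transaction; SCOPE_IDENTITY()
        -- carries each new key down, and XACT_ABORT undoes everything on error.
        CREATE PROCEDURE dbo.InsertOrder
            @CustomerName varchar(100),
            @ProductID    int,
            @Quantity     int
        AS
        BEGIN
            SET XACT_ABORT ON;
            BEGIN TRANSACTION;

            INSERT INTO Customers (Name) VALUES (@CustomerName);
            DECLARE @CustomerID int;
            SET @CustomerID = SCOPE_IDENTITY();

            INSERT INTO Orders (CustomerID) VALUES (@CustomerID);
            DECLARE @OrderID int;
            SET @OrderID = SCOPE_IDENTITY();

            INSERT INTO OrderDetails (OrderID, ProductID, Quantity)
            VALUES (@OrderID, @ProductID, @Quantity);

            COMMIT TRANSACTION;
        END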

    Read the article

  • CRT not initialized

    - by jfhs
    I'm trying to compile a project with MSVC 2010. Compilation is OK, but when I try to run the app, it gives me a "CRT not initialized" error. It is a console application, so I tried to specify mainCRTStartup as the entry point, but it didn't help. The other projects in the same solution don't have this problem; the difference I can see is that the failing one uses Boost (v1.38.0, if that matters). The Runtime Library is Multi-threaded DLL. The linker command line is:

        /OUT:"D:\temp\ghost\Release\ghost.exe" /INCREMENTAL:NO /NOLOGO /LIBPATH:"..\zlib\lib" /LIBPATH:"..\mysql\lib\opt" /LIBPATH:"..\boost\lib" "ws2_32.lib" "winmm.lib" "zdll.lib" "StormLibRAS.lib" "kernel32.lib" "user32.lib" "gdi32.lib" "winspool.lib" "comdlg32.lib" "advapi32.lib" "shell32.lib" "ole32.lib" "oleaut32.lib" "uuid.lib" "odbc32.lib" "odbccp32.lib" "D:\temp\ghost\bncsutil\vc8_build\Release\BNCSutil.lib" /MANIFEST /ManifestFile:"Release\ghost.exe.intermediate.manifest" /ALLOWISOLATION /MANIFESTUAC:"level='asInvoker' uiAccess='false'" /DEBUG /PDB:"D:\temp\ghost\Release\ghost.pdb" /SUBSYSTEM:CONSOLE /OPT:REF /OPT:ICF /PGD:"D:\temp\ghost\Release\ghost.pgd" /LTCG /TLBID:1 /ENTRY:"mainCRTStartup" /DYNAMICBASE /NXCOMPAT /MACHINE:X86 /ERRORREPORT:QUEUE

    Read the article

< Previous Page | 127 128 129 130 131 132 133 134 135 136 137 138  | Next Page >