Search Results

Search found 4860 results on 195 pages for 'parallel extensions'.


  • C# compiler error CS0006: metadata file is not found

    - by Rob
    I've built a C# compiler using the tutorial on MSDN and a few other resources, including here, and I've gotten it to work until I add additional reference assemblies. My errors stem from adding "System.dll" and "System.Windows.Forms.dll" to the ReferencedAssemblies list. Here's my code:

      private void SetUpCompilingParameters()
      {
          string ver = string.Format("{0}.{1}.{2}", Environment.Version.Major, Environment.Version.MajorRevision, Environment.Version.Build);
          string libDir = string.Format(@"{0}", Environment.CurrentDirectory);
          string raDir = @"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.0";
          string exWpfDir = string.Format(@"C:\WINDOWS\Microsoft.NET\Framework\v{0}\WPF", ver);
          string exDir = string.Format(@"C:\WINDOWS\Microsoft.NET\Framework\v{0}", ver);

          MyCompiler = new CSharpCodeProvider();

          CompilingParam = new CompilerParameters();
          CompilingParam.GenerateExecutable = false;
          CompilingParam.GenerateInMemory = true;
          CompilingParam.IncludeDebugInformation = false;
          CompilingParam.TreatWarningsAsErrors = false;

          CompilingParam.CompilerOptions = string.Format("/lib:{0}", libDir);
          CompilingParam.CompilerOptions = string.Format("/lib:{0}", raDir);
          CompilingParam.CompilerOptions = string.Format("/lib:{0}", exDir);
          //CompilingParam.CompilerOptions = string.Format("/lib:{0}", exWpfDir);

          CompilingParam.ReferencedAssemblies.Add("System.dll");
          CompilingParam.ReferencedAssemblies.Add("System.Windows.Forms.dll");
      }

    As you can see, I've explicitly referenced directories in CompilerOptions, but it's not helping. I'd like to test the solution here on Stack Overflow that uses:

      CompilingParam.ReferencedAssemblies.Add(typeof(System.Xml.Linq.Extensions).Assembly.Location);

    but I'm having trouble using it for the general System.dll etc...
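    One thing worth noting as a possible cause: each assignment to CompilingParam.CompilerOptions above replaces the previous value, so only the last /lib path survives (and a path containing spaces, like the Reference Assemblies directory, would also need quoting). A minimal sketch of two hedged alternatives - combining the /lib directories into one option string, and skipping /lib entirely by passing the loaded assemblies' full paths - using the same field names as the question; treat this as an assumption to verify rather than a known fix:

      // Sketch 1: a single combined /lib option, with each directory quoted.
      CompilingParam.CompilerOptions = string.Format("/lib:\"{0}\",\"{1}\",\"{2}\"", libDir, raDir, exDir);

      // Sketch 2: reference the framework assemblies by the location they are loaded from,
      // so the compiler gets a full path and no /lib lookup is needed.
      CompilingParam.ReferencedAssemblies.Add(typeof(System.Uri).Assembly.Location);                 // System.dll
      CompilingParam.ReferencedAssemblies.Add(typeof(System.Windows.Forms.Form).Assembly.Location);  // System.Windows.Forms.dll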

    Read the article

  • Why aren't Google API clients built on top of Apache's Abdera project?

    - by lisak
    Hey, could anybody please explain this to me? As far as I can see, the developers of Java's Google API client library are reinventing the wheel. It's like writing a new JDK for a Java project. I'm aware that the Google Data protocol is a little specific with regard to Atom publishing, but if one needs to use some of the fancy extensions and features that the Apache Abdera project offers for this protocol, it is better not to use the Google API client library and to implement the client from scratch with Abdera... And I'm sure that in a lot of cases its features, such as Abdera's JCR adapter, would come in very handy for Google Docs, Google Translator Toolkit and others.

    Now, it's great that there is a Google API client library to be used for Google Docs, but what am I going to do with the documents? I believe that in more than half the cases there is also a repository or database on the other side. And in that case, Abdera is needed, not the simple Google API clients that only marshal/unmarshal the feeds... In fact, there is something to persist in all of the Google APIs. It would make sense if Google decided to invest the effort into Abdera enhancement, but that doesn't seem to be the case...

    To make the question more specific: How are you developing Google API clients that need entry persistence (JCR, for instance)? What would be the best way to integrate a Google API client library with Apache Abdera?

    Read the article

  • JQuery Deferred - Adding to the Deferred contract

    - by MgSam
    I'm trying to add another asynchronous call to the contract of an existing Deferred before its state is set to success. Rather than try and explain this in English, see the following pseudo-code:

      $.when(
          $.ajax({
              url: someUrl,
              data: data,
              async: true,
              success: function (data, textStatus, jqXhr) {
                  console.log('Call 1 done.');
                  jqXhr.pipe(
                      $.ajax({
                          url: someUrl,
                          data: data,
                          async: true,
                          success: function (data, textStatus, jqXhr) {
                              console.log('Call 2 done.');
                          },
                      })
                  );
              },
          }),
          $.ajax({
              url: someUrl,
              data: data,
              async: true,
              success: function (data, textStatus, jqXhr) {
                  console.log('Call 3 done.');
              },
          })
      ).then(function () {
          console.log('All done!');
      });

    Basically, Call 2 is dependent on the results of Call 1. I want Call 1 and Call 3 to be executed in parallel. Once all 3 calls are complete, I want the All Done code to execute. My understanding is that Deferred.pipe() is supposed to chain another asynchronous call to the given deferred, but in practice, I always get Call 2 completing after All Done. Does anyone know how to get jQuery's Deferred to do what I want? Hopefully the solution doesn't involve ripping the code apart into chunks any further. Thanks for any help.
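    A hedged sketch of one way the chaining is usually expressed: pipe() takes a callback that returns the second request's promise, and the promise pipe() returns (rather than the original jqXHR) is what gets handed to $.when(), so "all done" waits for Call 2 as well. Only someUrl and data are taken from the question; the rest is an assumption about the intended behaviour:

      // Call 2 starts only after Call 1 resolves; Call 3 runs in parallel with that chain.
      var call1Then2 = $.ajax({ url: someUrl, data: data }).pipe(function () {
          console.log('Call 1 done.');
          return $.ajax({ url: someUrl, data: data }).done(function () {
              console.log('Call 2 done.');
          });
      });

      var call3 = $.ajax({ url: someUrl, data: data }).done(function () {
          console.log('Call 3 done.');
      });

      $.when(call1Then2, call3).then(function () {
          console.log('All done!');
      });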

    Read the article

  • How do you localize/internationalize an MVC Controller when using a SQL based localization provider?

    - by EBarr
    Hopefully this isn't too silly of a question. In MVC there appears to be plenty of localization support in the views. Once I get to the controller, however, it becomes murky. Using meta:resourcekey="blah" is out, same with <%$ Resources:PageTitle.Text %>.

    ASP.NET MVC - Localization Helpers -- suggested extensions for the Html helper classes like:

      Resource(this Controller controller, string expression, params object[] args)

    Similarly, Localize your MVC with ease suggested a slightly different extension like:

      Localize(this System.Web.UI.UserControl control, string resourceKey, params object[] args)

    None of these approaches works while in a controller. I put together the function below, and I'm using the controller's full class name as my VirtualPath. But I'm new to MVC and assume there's a better way.

      public static string Localize(System.Type theType, string resourceKey, params object[] args)
      {
          string resource = (HttpContext.GetLocalResourceObject(theType.FullName, resourceKey) ?? string.Empty).ToString();
          return mergeTokens(resource, args);
      }

    Thoughts? Comments?
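    A minimal sketch of the same idea written as an extension method on Controller, so the call site reads this.Localize(...); it assumes the strings live in a global resource file (named "ControllerStrings" here purely for illustration) instead of the per-path local resources used above:

      public static class ControllerLocalizationExtensions
      {
          // Looks a key up in a global .resx, so it works from any controller regardless of path.
          public static string Localize(this System.Web.Mvc.Controller controller,
                                        string resourceKey, params object[] args)
          {
              object resource = System.Web.HttpContext.GetGlobalResourceObject("ControllerStrings", resourceKey);
              string text = (resource ?? string.Empty).ToString();
              return args.Length > 0 ? string.Format(text, args) : text;
          }
      }

      // Usage inside an action:
      // ViewData["Message"] = this.Localize("WelcomeMessage", User.Identity.Name);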

    Read the article

  • How to share data between SSRS Security and Data Processing extension?

    - by user2904681
    I've spent a lot of time trying to solve the issue in the title and have not found a solution yet. I use MS SSRS 2012 with custom Security (based on Forms Authentication and ClaimsPrincipal) and Data Processing extensions. At the Data extension level I need to apply a filter programmatically based on one of the claims, which I only have access to at the Security extension level. Here is the problem: I do not know how to pass the claim from the Security to the Data Processing extension code... What I've tried:

      IAuthenticationExtension.LogonUser(string userName, string password, string authority)
      {
          ...
          ClaimsPrincipal claimsPrincipal = CreateClaimsPrincipal(...);
          Thread.CurrentPrincipal = claimsPrincipal;
          HttpContext.Current.User = claimsPrincipal;
          ...
      };

    But it doesn't work. It seems SSRS overrides it internally with either GenericPrincipal or FormsIdentity. The possible workarounds I'm thinking about (but haven't checked yet):

    1. Create an HttpModule which will create an HttpContext with all the required information (minus: getting the claims will be invoked every time - a huge operation)
    2. Write the logged-on user information required by the Data extension to a custom SQL table, then read it back
    3. Try somehow to append the claim to cookies during LogOn, then read it each time in IAuthenticationExtension.GetUserInfo and fill HttpContext

    None of them seems to be a good solution. I would be grateful for any help/advice/comments.

    Read the article

  • Ruby gem install question + answer (on Windows Vista Home Basic environment)

    - by Vamsi
    Recently I have been having problems installing the rcov gem on my Windows (Vista Home Basic) environment, so after googling I found one solution, and that is:

      gem install rcov -v 0.8.1.1.0   # version that installs without errors
      gem update rcov                 # update to the latest version, in my case rcov-0.8.1.2.0-x86-mswin32

    But this solution didn't work on my colleague's system (Windows XP), and after that we came to know about the RubyInstaller DevKit for Windows. But that DevKit is not working on my Vista machine; when I tried gem install rcov at my command prompt, it gave me this error:

      C:\Users\Vamsi>gem install rcov
      Building native extensions.  This could take a while...
      ERROR:  Error installing rcov:
              ERROR: Failed to build gem native extension.

      D:/Spritle/Programs/Ruby/bin/ruby.exe extconf.rb
      creating Makefile

      nmake
      'nmake' is not recognized as an internal or external command,
      operable program or batch file.

      Gem files will remain installed in D:/Spritle/Programs/Ruby/lib/ruby/gems/1.8/gems/rcov-0.9.8 for inspection.
      Results logged to D:/Spritle/Programs/Ruby/lib/ruby/gems/1.8/gems/rcov-0.9.8/ext/rcovrt/gem_make.out

    So after that my colleague tried to install nmake as well, but it was throwing some other error. Can someone suggest a better solution for solving this problem on all Windows environments? I am aware of Cygwin for Windows, but I am not sure that is a 100% solution either.

    Read the article

  • How to prevent external translation of a movieclip object on stage in AS3?

    - by Aaron H.
    I have a MovieClip object, which is exported for ActionScript (AS3) in an .swc file. When I place an instance of the clip on the stage without any modifications, it appears in the upper left corner, about half off stage (i.e. only the lower right quadrant of the object is visible). I understand that this is because the clip has a registration point which is not the upper left corner.

    If you call getBounds() on the movieclip you can get the bounds of the clip (presumably from the "point" that it's aligned on), which looks something like (left: -303, top: -100, right: 303, bottom: 100). You can subtract the left and top values from the clip x and y:

      clip.x -= bounds.left;
      clip.y -= bounds.top;

    This seems to properly align the clip fully on stage, with the top left of the clip squarely in the corner of the stage. But! Following that logic doesn't seem to work when aligning it on the center of the stage!

      clip.x = (stage.stageWidth / 2);
      // etc...

    This creates the crazy parallel universe where the clip is now down in the lower right corner of the stage. The only clue I have is from looking at clip.transform.matrix and clip.transform.concatenatedMatrix: matrix has a tx value of 748 (half of stage width) and a ty value of 426 (half of stage height), while concatenatedMatrix has a tx value of 1699.5 and a ty value of 967.75. That's also obviously where the movieclip is getting positioned, but why? Where is this additional translation coming from?

    Read the article

  • Can't Install libcurl PHP on Ubuntu Linux

    - by FranticPedantic
    I am trying to use the new Facebook API and it requires the libcurl PHP extension. I used:

      sudo apt-get install php5-curl
      sudo apachectl -k restart

    And it didn't work. I get the same error, and the phpinfo() page says nothing about libcurl. The source of this problem is probably that I built some of the tools from source (apache2, php), but then I got bored so I installed a lot of the extensions with the package manager. But I'm not exactly sure how to go about diagnosing the point of failure. The apt-get install for curl definitely worked, and the extension can be found at /usr/lib/php5/20060613/curl.so. I think a lot of my confusion stems from not knowing which files go where, and what purpose they have. Any help would be appreciated, and please tell me if I need to provide more information.

    Edit: The specific error I get is:

      Exception: Facebook needs the CURL PHP extension.

    from the line:

      if (!function_exists('curl_init')) {
        throw new Exception('Facebook needs the CURL PHP extension.');
      }

    Ubuntu: 9.10
    PHP: 5.2.13
    Loaded Configuration File: /etc/php5/apache2/php.ini
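    A hedged sketch of what this usually comes down to with a hand-built PHP: the php.ini that phpinfo() reports as loaded has to point at the directory containing the packaged curl.so and load it explicitly. The directory below is the one from the question; whether its API version (20060613) matches the hand-built PHP is an assumption worth checking first.

      ; In /etc/php5/apache2/php.ini (the file phpinfo() reports as loaded):
      ; make sure extension_dir is the directory that actually contains curl.so ...
      extension_dir = "/usr/lib/php5/20060613"
      ; ... and load the extension explicitly
      extension=curl.so

    If the hand-built PHP was compiled against a different Zend module API version, the packaged curl.so will refuse to load; in that case the usual options are rebuilding PHP with --with-curl, or rebuilding the extension against the hand-built PHP using phpize.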

    Read the article

  • Problem installing mysql gem on Snow Leopard: uninitialized constant MysqlCompat::MysqlRes

    - by emson
    Hi all, I've got a problem trying to install the Ruby mysql gem driver. I recently upgraded to Snow Leopard and did the Hivelogic manual install of MySQL. This all seems to work fine, as I can access mysql from the command line and make changes to the database. My problem is that if I now use rake db:migrate I get:

      rake aborted!
      uninitialized constant MysqlCompat::MysqlRes

      (See full trace by running task with --trace)

    Now it appears that my mysql gem isn't working correctly, as I can access MySQL fine from Python using the Python driver (which I also compiled). I therefore tried to rebuild the gem using the following command from this site: http://techliberty.blogspot.com/ (incidentally, I am using a recent Intel MacBook Pro):

      sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/usr/local/mysql/bin/mysql_config

    This compiles, although I get "No definition for ..." messages during the documentation step:

      Building native extensions.  This could take a while...
      Successfully installed mysql-2.8.1
      1 gem installed
      Installing ri documentation for mysql-2.8.1...
      No definition for next_result
      No definition for field_name
      ...

    I'm a little stumped, as my mysql_config is located in the correct place (/usr/local/mysql/bin/mysql_config) and I have removed all other instances of the mysql gem from my system. Any suggestions would be greatly appreciated. Many thanks.

    PS I saw this previous post http://stackoverflow.com/questions/1332207/uninitialized-constant-mysqlcompatmysqlres-using-mms2r-gem but it doesn't seem applicable for my version.

    Read the article

  • Using Rx to synchronize asynchronous events

    - by Martin Liversage
    I want to put Reactive Extensions for .NET (Rx) to good use and would like to get some input on doing some basic tasks. To illustrate what I'm trying to do, I have a contrived example where I have an external component with asynchronous events:

      class Component
      {
          public void BeginStart() { ... }
          public event EventHandler Started;
      }

    The component is started by calling BeginStart(). This method returns immediately, and later, when the component has completed startup, the Started event fires. I want to create a synchronous start method by wrapping the component and waiting until the Started event is fired. This is what I've come up with so far:

      class ComponentWrapper
      {
          readonly Component component = new Component();

          void StartComponent()
          {
              var componentStarted = Observable.FromEvent<EventArgs>(this.component, "Started");
              using (var startedEvent = new ManualResetEvent(false))
              using (componentStarted.Take(1).Subscribe(e => { startedEvent.Set(); }))
              {
                  this.component.BeginStart();
                  startedEvent.WaitOne();
              }
          }
      }

    I would like to get rid of the ManualResetEvent, and I expect that Rx has a solution. But how?
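    A hedged sketch of one Rx-only way to block without the reset event, assuming an Rx build that has PublishLast (which caches the single value in an AsyncSubject) and the blocking First() operator; the member names mirror the wrapper above:

      void StartComponent()
      {
          var started = Observable.FromEvent<EventArgs>(this.component, "Started")
                                  .Take(1)
                                  .PublishLast();   // remembers the event even if it fires very early

          using (started.Connect())                 // start listening before BeginStart()
          {
              this.component.BeginStart();
              started.First();                      // blocks the caller until Started has fired
          }
      }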

    Read the article

  • Getting proxies of the correct type in NHibernate

    - by Nir
    I have a problem with uninitialized proxies in NHibernate.

    The Domain Model

    Let's say I have two parallel class hierarchies: Animal, Dog, Cat and AnimalOwner, DogOwner, CatOwner, where Dog and Cat both inherit from Animal, and DogOwner and CatOwner both inherit from AnimalOwner. AnimalOwner has a reference of type Animal called OwnedAnimal. Here are the classes in the example:

      public abstract class Animal
      {
          // some properties
      }

      public class Dog : Animal
      {
          // some more properties
      }

      public class Cat : Animal
      {
          // some more properties
      }

      public class AnimalOwner
      {
          public virtual Animal OwnedAnimal { get; set; }
          // more properties...
      }

      public class DogOwner : AnimalOwner
      {
          // even more properties
      }

      public class CatOwner : AnimalOwner
      {
          // even more properties
      }

    The classes have proper NHibernate mappings, all properties are persistent and everything that can be lazy loaded is lazy loaded. The application business logic only lets you set a Dog in a DogOwner and a Cat in a CatOwner.

    The Problem

    I have code like this:

      public void ProcessDogOwner(DogOwner owner)
      {
          Dog dog = (Dog)owner.OwnedAnimal;
          ....
      }

    This method can be called by many different methods; in most cases the dog is already in memory and everything is fine, but rarely the dog isn't already in memory - in that case I get an NHibernate "uninitialized proxy", and the cast throws an exception because NHibernate generates a proxy for Animal and not for Dog. I understand that this is how NHibernate works, but I need to know the type without loading the object - or, more correctly, I need the uninitialized proxy to be a proxy of Cat or Dog and not a proxy of Animal.

    Constraints

    I can't change the domain model; the model is handed to me by another department, and I tried to get them to change it and failed. The actual model is much more complicated than the example and the classes have many references between them, so using eager loading or adding joins to the queries is out of the question for performance reasons. I have full control of the source code, the hbm mapping and the database schema, and I can change them any way I want (as long as I don't change the relationships between the model classes). I have many methods like the one in the example and I don't want to modify all of them.

    Thanks, Nir
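    A hedged sketch of one workaround (it does force the animal to load at the point of the cast, so whether it fits the performance constraints above is an open question): initialize the reference and then ask the session's persistence context for the underlying implementation, so the cast sees a real Dog rather than an Animal proxy. The session parameter is hypothetical - however the current ISession is obtained in the real code.

      public void ProcessDogOwner(DogOwner owner, NHibernate.ISession session)
      {
          // Load the proxy's target, then unwrap it to the concrete subclass instance.
          NHibernateUtil.Initialize(owner.OwnedAnimal);
          object impl = session.GetSessionImplementation()
                               .PersistenceContext
                               .Unproxy(owner.OwnedAnimal);
          Dog dog = (Dog)impl;
          // ...
      }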

    Read the article

  • SQL Server insert performance

    - by Jose
    I have an insert query that gets generated like this:

      INSERT INTO InvoiceDetail (LegacyId,InvoiceId,DetailTypeId,Fee,FeeTax,Investigatorid,SalespersonId,CreateDate,CreatedById,IsChargeBack,Expense,RepoAgentId,PayeeName,ExpensePaymentId,AdjustDetailId)
      VALUES(1,1,2,1500.0000,0.0000,163,1002,'11/30/2001 12:00:00 AM',1116,0,550.0000,850,NULL,@ExpensePay1,NULL);
      DECLARE @InvDetail1 INT;
      SET @InvDetail1 = (SELECT @@IDENTITY);

    This query is generated for only 110K rows, yet it takes 30 minutes for all of these queries to execute. I checked the query plan and the largest % nodes are:

    A Clustered Index Insert at 57% query cost, which has a long XML that I don't want to post.

    A Table Spool which is 38% query cost:

      <RelOp AvgRowSize="35" EstimateCPU="5.01038E-05" EstimateIO="0" EstimateRebinds="0"
             EstimateRewinds="0" EstimateRows="1" LogicalOp="Eager Spool" NodeId="80"
             Parallel="false" PhysicalOp="Table Spool" EstimatedTotalSubtreeCost="0.0466109">
        <OutputList>
          <ColumnReference Database="[SkipPro]" Schema="[dbo]" Table="[InvoiceDetail]" Column="InvoiceId" />
          <ColumnReference Database="[SkipPro]" Schema="[dbo]" Table="[InvoiceDetail]" Column="InvestigatorId" />
          <ColumnReference Column="Expr1054" />
          <ColumnReference Column="Expr1055" />
        </OutputList>
        <Spool PrimaryNodeId="3" />
      </RelOp>

    So my question is: what can I do to improve the speed of this thing? I already run ALTER TABLE TABLENAME NOCHECK CONSTRAINTS ALL before the queries and then ALTER TABLE TABLENAME NOCHECK CONSTRAINTS ALL after the queries, and that hardly shaved anything off the time.

    Now, I am running these queries from a .NET application that uses a SqlCommand object to send the query. I then tried to output the SQL commands to a file and then execute it using sqlcmd, but I wasn't getting any updates on how it was doing, so I gave up on that. Any ideas or hints or help?
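    A hedged sketch of two small changes that often help this pattern: sending the statements inside one explicit transaction (so 110K individual auto-commits don't each force a log flush) and capturing the identity with SCOPE_IDENTITY() instead of @@IDENTITY, which can pick up an identity generated by a trigger on another table. Table, column and variable names are taken from the question; whether this alone is enough is an assumption.

      BEGIN TRANSACTION;

      INSERT INTO InvoiceDetail (LegacyId, InvoiceId, DetailTypeId, Fee, FeeTax,
                                 InvestigatorId, SalespersonId, CreateDate, CreatedById,
                                 IsChargeBack, Expense, RepoAgentId, PayeeName,
                                 ExpensePaymentId, AdjustDetailId)
      VALUES (1, 1, 2, 1500.0000, 0.0000, 163, 1002, '11/30/2001 12:00:00 AM', 1116,
              0, 550.0000, 850, NULL, @ExpensePay1, NULL);

      DECLARE @InvDetail1 INT;
      SET @InvDetail1 = SCOPE_IDENTITY();   -- scoped to this batch, unlike @@IDENTITY

      -- ... remaining inserts ...

      COMMIT TRANSACTION;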

    Read the article

  • Renaming TurboGears 2's Repoze Fields with TGAdmin

    - by William Chambers
    I've been working on renaming TurboGears 2's repoze 'groups' field to 'roles' to free the namespace and db tables for other purposes. Also, roles makes much more sense to me than groups because I have a strong Drupal background. Now, I have found some of the docs to do this, such as these:

    http://www.turbogears.org/2.1/docs/main/Auth/Customization.html#customizing-the-model-structure-assumed-by-the-quickstart
    http://code.gustavonarea.net/repoze.what-quickstart/#customizing-the-model-definition

    However, these only go part of the way. I have made (I'm pretty sure at least, I've double-checked a few times) all the changes required, as you can see in this diff. This seems to work fine; however, I've run into a rather major issue with the TurboGears admin system. I tried http://turbogears.org/2.0/docs/main/Extensions/Admin/index.html and it didn't seem to make any difference, though I'm not 100% sure I did it correctly. The problem occurs when I attempt to go to localhost/admin/permissions/. It causes an Internal Server Error and outputs the following error: http://pastebin.com/YWMH3SiU

    This error does not happen on the Roles/Users pages, and the permissions /edit/1 page also works. I'm running Kubuntu 10.04 with TG 2.1b2. (I'm running the beta mostly for easier Mako support, which is really important.) Any help would be very appreciated.

    Read the article

  • iPhone static library Clang/LLVM error: non_lazy_symbol_pointers

    - by Bekenn
    After several hours of experimentation, I've managed to reduce the problem to the following example (C++):

      extern "C" void foo();

      struct test
      {
          ~test() { }
      };

      void doTest()
      {
          test t;  // 1
          foo();   // 2
      }

    This is being compiled for iOS devices in Xcode 4.2, using the provided Clang compiler (Apple LLVM compiler 3.0) and the iOS 5.0 SDK. The project is configured as a Cocoa Touch Static Library, and "Enable Linking With Shared Libraries" is set to No because I'm building an AIR native extension. The function foo is defined in another external library. (In my actual project, this would be any of the C API functions defined by Adobe for use in AIR native extensions.) When attempting to compile this code, I get back the error:

      FATAL:incompatible feature used: section type non_lazy_symbol_pointers (must specify "-dynamic" to be used)
      clang: error: assembler command failed with exit code 1 (use -v to see invocation)

    The error goes away if I comment out either of the lines marked 1 or 2 above, or if I change the build setting "Enable Linking With Shared Libraries" to Yes. (However, if I change the build setting, then I get multiple "ld warning: unexpected srelocation type 9" warnings when linking the library into the final project, and the application crashes when running on the device.) The build error also goes away if I remove the destructor from test.

    So: Is this a bug in Clang? Am I missing some all-important and undocumented build setting? The interaction between an externally-provided function and a struct with a destructor is very peculiar, to say the least.

    Read the article

  • Set up Gitosis, but can't clone

    - by Tim Rupe
    I've set up Gitosis on a remote Ubuntu box, which I will refer to as linuxserver in the following commands. I'm connecting from a Windows box using Cygwin. I followed the instructions according to: http://scie.nti.st/2007/11/14/hosting-git-repositories-the-easy-and-secure-way

    I had no problems up until I needed to clone the gitosis-admin repository to my local machine:

      git clone git@linuxserver:gitosis-admin.git

    When I do this, the command executes but hangs there displaying nothing until I ctrl-c to get back to a command prompt. No messages are displayed at all. I'm pretty sure I have my ssh keys set up properly, because logging in to my regular account using "ssh linuxserver" works perfectly without asking for a password.

    Edit: Over the weekend I set up a nearly identical Ubuntu box at home and had no problem setting up Gitosis. The only difference was that I was connecting from OS X instead of Cygwin.

    Edit: I've also discovered that when using the Bash shell provided with "Git Extensions", I have no problems, so the issue definitely seems to be some kind of Cygwin conflict.

    Edit: Just an update, but about a month after posting this question, I switched to Mercurial and found that I prefer it much more than git. Thanks for the suggestions, but I don't plan on going back to git to try any of them out.

    Read the article

  • Delphi dbExpress and Interbase: UTF8 migration steps and risks?

    - by mjustin
    Currently, our database uses Win1252 as the only character encoding. We will have to support Unicode in the database tables soon, which means we have to perform this migration for four databases and around 80 Delphi applications which run in-house in a 24/7 environment.

    Are there recommendations for database migrations to UTF-8 (or UNICODE_FSS) for Delphi applications? Some questions are listed below. Many thanks in advance for your answers!

    - Are there tools which help with the migration of the existing databases (sizes between 250 MB and 2 GB, no Blob fields), by dumping the data, recreating the database with UNICODE_FSS or UTF-8, and loading the data back?
    - Are there known problems with Delphi 2009, dbExpress and Interbase 7.5 related to Unicode character sets?
    - Would you recommend upgrading the databases to Interbase 2009 first? (This upgrade is planned but does not have a high priority.)
    - Can we simply migrate the database and Delphi will handle the Unicode character sets automatically, or will we have to change all character field types in every datamodule (dfm and source code) too?
    - Which strategy would you recommend for working on the migration in parallel with the normal development and maintenance of the existing application? The application runs in-house, so development and database administration are done internally.

    Read the article

  • How can I make this method more Scalalicious

    - by Neil Chambers
    I have a function that calculates the left and right node values for some collection of treeNodes given a simple node.id, node.parentId association. It's very simple and works well enough... but, well, I am wondering if there is a more idiomatic approach. Specifically, is there a way to track the left/right values without using some externally tracked value but still keep the tasty recursion?

      /*
       * A tree node
       */
      case class TreeNode(val id: String, val parentId: String) {
        var left: Int = 0
        var right: Int = 0
      }

      /*
       * a method to compute the left/right node values
       */
      def walktree(node: TreeNode) = {
        /*
         * increment state for the inner function
         */
        var c = 0

        /*
         * A method to set the increment state
         */
        def increment = { c += 1; c } // poo

        /*
         * the tasty inner method
         * treeNodes is a List[TreeNode]
         */
        def walk(node: TreeNode): Unit = {
          node.left = increment

          /*
           * recurse on all direct descendants
           */
          treeNodes filter (_.parentId == node.id) foreach (walk(_))

          node.right = increment
        }

        walk(node)
      }

      walktree(someRootNode)

    Edit - The list of nodes is taken from a database. Pulling the nodes into a proper tree would take too much time. I am pulling a flat list into memory, and all I have is an association via node ids as pertains to parents and children. Adding left/right node values allows me to get a snapshot of all children (and children's children) with a single SQL query. The calculation needs to run very quickly in order to maintain data integrity should parent-child associations change (which they do very frequently). In addition to using the awesome Scala collections, I've also boosted speed by using parallel processing for some pre/post filtering on the tree nodes. I wanted to find a more idiomatic way of tracking the left/right node values. After looking at the answers listed, I have settled on this synthesised version:

      def walktree(node: TreeNode) = {
        def walk(node: TreeNode, counter: Int): Int = {
          node.left = counter
          node.right = treeNodes
            .filter(_.parentId == node.id)
            .foldLeft(counter + 1) { (counter, curnode) => walk(curnode, counter) + 1 }
          node.right
        }
        walk(node, 1)
      }

    Read the article

  • Understanding F# Asynchronous Programming

    - by Yin Zhu
    I kind of know the syntax of asynchronous programming in F#. E.g.:

      let downloadUrl (url: string) = async {
          let req = HttpWebRequest.Create(url)
          // Run operation asynchronously
          let! resp = req.AsyncGetResponse()
          let stream = resp.GetResponseStream()
          // Dispose 'StreamReader' when completed
          use reader = new StreamReader(stream)
          // Run asynchronously and then return the result
          return! reader.AsyncReadToEnd() }

    In the F# expert book (and many other sources), they say that let! var = expr simply means "perform the asynchronous operation expr and bind the result to var when the operation completes. Then continue by executing the rest of the computation body". I also know that a new thread is created when performing an async operation. My original understanding was that there are two parallel threads after the async operation, one doing I/O and one continuing to execute the async body at the same time.

    But in this example, I am confused at:

      let! resp = req.AsyncGetResponse()
      let stream = resp.GetResponseStream()

    What happens if resp has not arrived yet and the thread in the async body wants to call GetResponseStream? Is this a possible error? So maybe my original understanding was wrong. The quoted sentence in the F# expert book actually means "create a new thread, hang the current thread up, and when the new thread finishes, wake up the body thread and continue" - but in this case I don't see how we save any time.

    With my original understanding, the time is saved when there are several independent I/O operations in one async block, so that they can be done at the same time without interfering with each other. But here, if I don't get the response, I cannot create the stream; and only when I have the stream can I start reading it. Where's the time gained?
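    A minimal sketch of where the saving actually shows up, under the usual reading of let!: the rest of the block runs only after the response arrives (as a continuation), so there is no race with GetResponseStream - but while the request is in flight no thread sits blocked, which is what lets many downloads be in progress at once. Assuming the downloadUrl function above, composing several of them looks roughly like this:

      // Kick off all downloads; each releases its thread while waiting on the network.
      let downloadAll (urls: string list) =
          urls
          |> List.map downloadUrl      // build the async computations (nothing runs yet)
          |> Async.Parallel            // combine them so they run concurrently
          |> Async.RunSynchronously    // start them and wait for all the results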

    Read the article

  • Clean up domain list in Excel - regex / macros?

    - by Tim
    I have a huge spreadsheet of domains that I need to clean up as follows:

    1. Remove all http:// (simple replace all - "http://" with "")
    2. Remove any www. (simple replace all - "www." with "")
    3. Delete any sub-domains (delete the actual row completely, not just the subdomain from the url)
    4. Remove anything after the domain extension, i.e. website.com/blah/blahbah/ becomes just website.com (simple replace all - "/*" with "", then replace all "/" with "")

    So what I'm left with is just a spreadsheet of clean domains like "website.com". I think I've got 1, 2 and 4 sorted (as above), but I'm really struggling with 3. Any ideas? Can I do this with regexp / VBA, and actually delete the row completely? Sample data:

      http://www.scholastic.com/kids/stacks/games/
      http://imgworld.teamworkonline.com/
      http://topfreegraphics.com/
      http://www.workcircle.co.uk/
      http://www.healthycanadians.gc.ca/index-eng.php
      http://gsociology.icaap.org/methods/soft.html

    Post 1, 2 and 4 would leave me with:

      scholastic.com
      imgworld.teamworkonline.com
      topfreegraphics.com
      workcircle.co.uk
      healthycanadians.gc.ca
      gsociology.icaap.org

    It's those pesky sub-domains I need to just delete completely - just delete the row. I've realised I can't just search for 2 x ".", because obviously plenty of domain extensions (i.e. .co.uk) include that. Any help appreciated.
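    A hedged VBA sketch of step 3, meant to run after steps 1, 2 and 4 have already cleaned the cells: it treats anything with three or more dot-separated labels as a subdomain unless the label just before the TLD is on a small whitelist of second-level suffixes (which covers workcircle.co.uk and healthycanadians.gc.ca from the sample, but would need extending for other country suffixes). The column (A) and the whitelist itself are assumptions:

      Sub DeleteSubdomainRows()
          Dim r As Long, lastRow As Long
          Dim parts() As String, sld As String
          lastRow = Cells(Rows.Count, "A").End(xlUp).Row

          ' Walk bottom-up so deleting a row doesn't skip the one below it
          For r = lastRow To 1 Step -1
              parts = Split(Cells(r, "A").Value, ".")
              If UBound(parts) >= 2 Then                    ' three or more labels
                  sld = parts(UBound(parts) - 1)            ' label just before the TLD
                  If InStr(1, ",co,gc,ac,gov,org,net,com,", "," & sld & ",") = 0 Then
                      Rows(r).Delete                        ' not a known public suffix -> subdomain row
                  End If
              End If
          Next r
      End Sub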

    Read the article

  • How to merge duplicates in 2D python arrays

    - by Wei Lou
    Hi, I have a set of data similar to this:

      No  Start Time    End Time      CallType  Info
      1   13:14:37.236  13:14:53.700  Ping1     RTT(Avr):160ms
      2   13:14:58.955  13:15:29.984  Ping2     RTT(Avr):40ms
      3   13:19:12.754  13:19:14.757  Ping3_1   RTT(Avr):620ms
      3   13:19:12.754                Ping3_2   RTT(Avr):210ms
      4   13:14:58.955  13:15:29.984  Ping4     RTT(Avr):360ms
      5   13:19:12.754  13:19:14.757  Ping1     RTT(Avr):40ms
      6   13:19:59.862  13:20:01.522  Ping2     RTT(Avr):163ms
      ...

    When I parse through it, I need to merge the results of Ping3_1 and Ping3_2, then take the average of those two rows and export it as one row. So the end result would be like this:

      No  Start Time    End Time      CallType  Info
      1   13:14:37.236  13:14:53.700  Ping1     RTT(Avr):160ms
      2   13:14:58.955  13:15:29.984  Ping2     RTT(Avr):40ms
      3   13:19:12.754  13:19:14.757  Ping3     RTT(Avr):415ms
      4   13:14:58.955  13:15:29.984  Ping4     RTT(Avr):360ms
      5   13:19:12.754  13:19:14.757  Ping1     RTT(Avr):40ms
      6   13:19:59.862  13:20:01.522  Ping2     RTT(Avr):163ms

    Currently I am concatenating columns 0 and 1 to make a unique key, finding duplicates there, and then doing the rest of the special treatment for those parallel pings. It is not elegant at all. I'm just wondering what a better way to do it would be. Thanks!
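    A hedged Python sketch of the merge step, assuming each parsed row is a list like [no, start, end, calltype, info] and that the RTT can be pulled out of the info string; the merged row keeps the first duplicate's times, strips the _1/_2 suffix from the call type, and averages the RTTs:

      import re
      from collections import OrderedDict

      def rtt_ms(info):
          """Pull the number out of a string like 'RTT(Avr):160ms'."""
          return int(re.search(r'RTT\(Avr\):(\d+)ms', info).group(1))

      def merge_rows(rows):
          groups = OrderedDict()            # group rows by their 'No' column, keeping order
          for row in rows:
              groups.setdefault(row[0], []).append(row)

          merged = []
          for no, dupes in groups.items():
              out = dupes[0][:]             # copy the first duplicate
              out[3] = re.sub(r'_\d+$', '', out[3])            # Ping3_1 / Ping3_2 -> Ping3
              avg = sum(rtt_ms(r[4]) for r in dupes) / len(dupes)
              out[4] = 'RTT(Avr):%dms' % avg
              merged.append(out)
          return merged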

    Read the article

  • Debian packaging of a Python package.

    - by chrisdew
    I need to write (or find) a script to create a Debian package (using python-support) from a Python package. The Python package will be pure Python (no C extensions). The Python package (for testing purposes) will just be a directory with an empty __init__.py file and a single Python module, package_test.py.

    The packaging script must use python-support to provide the correct bytecode for possible multiple installations of Python on a target platform (i.e. v2.5 and v2.6 on Ubuntu Jaunty). Most of the advice I find while googling consists of nasty hacks that don't even use python-support or python-central. I have so far spent hours researching this, and the best I can come up with is to hack around the script from an existing open source project - but I don't know which bits are required for what I'm doing.

    Has anyone here made a Debian package out of a Python package in a reasonably non-hacky way? I'm starting to think that it will take me more than a week to go from no knowledge of Debian packaging and python-support to getting a working script. How long has it taken others? Any advice?

    Chris.

    Read the article

  • How to lazy load scripts in YUI that accompany ajax html fragments

    - by Chris Beck
    I have a web app with tabs for Messages and Contacts (think Gmail). Each tab has a Y.on('click') event listener that retrieves an HTML fragment via Y.io() and renders the fragment via node.setContent(). However, the Contacts tab also requires contact.js to enable an Edit button in the fragment.

    How do I defer the cost of loading contact.js until the user clicks on the Contacts tab? How should contact.js add its listener to the Edit button? The Complete function of my tab's on('click') event could serialize Get.script('contact.js') after Y.io('fragment'). However, for better performance, I would prefer to download the script in parallel with the HTML fragment. In that case, I would need to defer adding an event listener to the Edit button until the Edit button is available.

    This seems like a common RIA design pattern. What is the best way to implement this with YUI? How should I get the script? How should I defer sections of the script until elements in the fragment are available in the DOM?

    Read the article

  • Communication between EJB3 Instances (JEE inter-bean communication) possible?

    - by Hank
    I'm designing a part of a JEE6 application, consisting of EJB3 beans. Part of the requirements are multiple parallel (say a few hundred) long-running (over days) database hunts. Individual hunts have different search parameters (start time, end time, query filter). Parameters may get changed over time.

    Currently I'm thinking of the following:

    - SearchController (Stateless Session Bean) formulates a set of search parameters and sends it off to a SearchListener via JMS
    - SearchListener (Message Driven Bean) receives the search parameters and instantiates a SearchWorker with those parameters
    - SearchWorker (SLSB) hunts repeatedly through the database; when it finds something, the result is sent off via JMS and the search continues; when the given 'end-time' has been reached, it ends

    What I'm wondering now:

    - Is there a problem with EJB3 instances running for days? (Other than that I need to be able to deal with container restarts...)
    - How do I know how many, and which, EJB instances of SearchWorker are currently running?
    - Is it possible to communicate with them individually (similar to sending a System V signal to a unix process), e.g. to send new parameters, to end an instance, etc.?

    Read the article

  • Should I use formal methods on my software project?

    - by Michael
    Our client wants us to build a web-based, rich internet application for gathering software requirements. Basically it's a web-based CASE tool that follows a specific process for getting requirements from stakeholders. I'm the project manager and we're still in the early phases of the project.

    I've been thinking about using formal methods to help clarify the requirements for the tool for both my client and the developers. By formal methods I mean some form of modeling, possibly something mathematically based. Some of the things I've read about and am considering include Z (http://en.wikipedia.org/wiki/Z_notation), state machines, UML 2.0 (possibly with extensions such as OCL), Petri nets, and some coding-level techniques like contracts and pre- and post-conditions. Is there anything else I should consider?

    The developers are experienced, but depending on the formalism used they may have to learn some math. I'm trying to determine whether it's worthwhile for me to use formal methods on this project and, if so, to what extent. I know "it depends", so the most helpful answers for me are a yes/no with supporting arguments. Would you use formal methods if you were on this project?

    Read the article

  • How to distinguish composition and self-typing use cases

    - by ayvango
    Scala has two instruments for expressing object composition: the original self-type concept and well-known trivial composition. I'm curious which situations I should use each in. There are obvious differences in their applicability: self-types require you to use traits, while object composition allows you to change extensions at run time with a var declaration.

    Leaving technical details behind, I can figure two indicators to help classify use cases. If some object is used as a combinator for a complex structure such as a tree, or simply has several similarly typed parts (a 1-car-to-4-wheels relation), then it should use composition. The extreme opposite use case: let's assume one trait becomes too big to observe clearly and it gets split up. It is quite natural that you should use self-types in this case.

    Those rules are not absolute. You may do extra work to convert code between these techniques, e.g. you may replace a 4-wheel composition with self-typing over Product4, or use Cake[T <: MyType] {part : MyType} instead of Cake { this : MyType => } for cake pattern dependencies. But both cases seem counterintuitive and give you extra work.

    There are plenty of borderline use cases, though. One-to-one relations are very hard to decide on. Is there any simple rule to decide which technique is preferable? Self-types make your classes abstract, composition makes your code verbose. Self-types give you problems with blending namespaces and also give you extra typing for free (you get not just a cocktail of two elements but a gasoline-motor oil cocktail known as a petrol bomb). How can I choose between them? What hints are there?

    Update: Let us discuss the following example: the Adapter pattern. What benefits does it have with the self-typing and the composition approaches?
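    A minimal sketch of the two styles side by side, using a made-up Logger/Service pair (not taken from the question) just to make the trade-off concrete: the self-type version bakes the dependency in at mix-in time and blends Logger's namespace into the trait, while the composition version keeps the dependency as a member that can be swapped at run time at the cost of more verbose call sites:

      trait Logger { def log(msg: String): Unit }
      class ConsoleLogger extends Logger { def log(msg: String) = println(msg) }

      // Self-typing: Service can only be mixed into something that *is* a Logger.
      trait Service { this: Logger =>
        def run(): Unit = log("running")           // Logger's members are in scope directly
      }
      val s1 = new ConsoleLogger with Service      // dependency fixed at construction time

      // Composition: Service2 *has* a Logger, which can be replaced later.
      class Service2(var logger: Logger) {
        def run(): Unit = logger.log("running")    // explicit delegation, more verbose
      }
      val s2 = new Service2(new ConsoleLogger)
      s2.logger = new ConsoleLogger                // swappable at run time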

    Read the article
