Search Results

Search found 5490 results on 220 pages for 'quick'.

  • expand floating object when floating object within expands

    - by Scarface
    Hey guys, quick question: I have a link that, when clicked, drops down a list. This list is floated to the right to position it properly, and it sits inside another box that has also been floated. My problem is that when the list expands, the box does not, and the list comes out of the container box - unless the list is not floated. However, floating it seems like the only way to get it to the position I want. If anyone has any ideas on how to solve this problem I would appreciate it.

        .container-box {
            margin-top: 0px;
            float: left;
            padding-left: 5px;
            position: relative;
        }

        #box-within {
            float: right;
            font-weight: bold;
            max-height: 250px;
            display: none;
            background-color: #fff;
            overflow: auto;
            width: 325px;
            padding: 5px;
            position: relative;
        }
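    One approach that may be worth trying, assuming the goal is for .container-box to grow around the floated #box-within rather than clip it, is to make the container contain its own floats - either by giving it a new block formatting context or by adding a clearfix. A rough sketch using the selectors from the question:

        /* Sketch: let the floated container wrap its floated children. */

        /* Option 1: new block formatting context (note this will clip anything
           intentionally hanging outside the box). */
        .container-box {
            overflow: hidden;
        }

        /* Option 2: a clearfix, if overflow: hidden is too aggressive. */
        .container-box:after {
            content: "";
            display: block;
            clear: both;
        }

    If the dropdown is instead meant to overlay the content below rather than push it down, position: absolute on #box-within (with position: relative kept on the container) is the other direction to explore.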

    Read the article

  • Sorting in Lua, counting number of items

    - by Josh
    Two quick questions (I hope...) with the following code. The script below checks if a number is prime, and if not, returns all the factors for that number; otherwise it just returns that the number is prime. Pay no attention to the zs. stuff in the script, for that is client specific and has no bearing on script functionality. The script itself works almost wonderfully, except for two minor details - the first being the factor list doesn't return itself sorted... that is, for 24, it'd return 1, 2, 12, 3, 8, 4, 6, and 24 instead of 1, 2, 3, 4, 6, 8, 12, and 24. I can't print it as a table, so it does need to be returned as a list. If it has to be sorted as a table first THEN turned into a list, I can deal with that. All that matters is the end result being the list. The other detail is that I need to check if there are only two numbers in the list or more. If there are only two numbers, it's a prime (1 and the number). The current way I have it does not work. Is there a way to accomplish this? I appreciate all the help!

        function get_all_factors(number)
            local factors = 1
            for possible_factor=2, math.sqrt(number), 1 do
                local remainder = number%possible_factor
                if remainder == 0 then
                    local factor, factor_pair = possible_factor, number/possible_factor
                    factors = factors .. ", " .. factor
                    if factor ~= factor_pair then
                        factors = factors .. ", " .. factor_pair
                    end
                end
            end
            factors = factors .. ", and " .. number
            return factors
        end

        local allfactors = get_all_factors(zs.param(1))
        if zs.func.numitems(allfactors)==2 then
            return zs.param(1) .. " is prime."
        else
            return zs.param(1) .. " is not prime, and its factors are: " .. allfactors
        end
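    A sketch of one way to get both the sorted order and the count, assuming the zs.param / zs.func calls behave as described in the question: collect the factors in a Lua table first, sort and count that table, and only flatten it into a string at the end.

        -- Sketch: gather the factors in a table so they can be sorted and counted.
        local function get_sorted_factors(number)
            local factors = {1}
            for candidate = 2, math.sqrt(number) do
                if number % candidate == 0 then
                    table.insert(factors, candidate)
                    local pair = number / candidate
                    if pair ~= candidate then
                        table.insert(factors, pair)
                    end
                end
            end
            table.insert(factors, number)
            table.sort(factors)
            return factors
        end

        local factors = get_sorted_factors(zs.param(1))
        if #factors == 2 then                 -- only 1 and the number itself
            return zs.param(1) .. " is prime."
        else
            return zs.param(1) .. " is not prime, and its factors are: "
                   .. table.concat(factors, ", ")
        end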

    Read the article

  • PHP – Slow String Manipulation

    - by Simon Roberts
    I have some very large data files and for business reasons I have to do extensive string manipulation (replacing characters and strings). This is unavoidable. The number of replacements runs into hundreds of thousands. It's taking longer than I would like. PHP is generally very quick, but I'm doing so many of these string manipulations that it's slowing down and script execution is running into minutes. This is a pain because the script is run frequently. I've done some testing and found that str_replace is fastest, followed by strstr, followed by preg_replace. I've also tried individual str_replace statements as well as constructing arrays of patterns and replacements. I'm toying with the idea of isolating the string manipulation operations and writing them in a different language, but I don't want to invest time in that option only to find that the improvements are negligible. Plus, I only know Perl, PHP and COBOL, so for any other language I would have to learn it first. I'm wondering how other people have approached similar problems. I have searched and I don't believe that this duplicates any existing questions.
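    Since str_replace, strstr and preg_replace have already been benchmarked, one more candidate that may be worth timing is strtr() with an array of replacement pairs: it makes a single pass over the subject string instead of one pass per pattern, which can matter when the number of replacements runs into the hundreds of thousands. A minimal sketch with made-up pattern names:

        <?php
        // Sketch: one scan of $data with strtr() instead of many str_replace() calls.
        // $replacements maps search strings to their replacements; the longest
        // matching key wins at each position.
        $replacements = array(
            'old-token-1' => 'new-token-1',
            'old-token-2' => 'new-token-2',
            "\t"          => ' ',
        );

        $data = strtr($data, $replacements);

    Whether it beats str_replace depends on the data and the number of patterns, so it is only worth keeping if it wins in the same timing harness already being used.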

    Read the article

  • Why is a DataViewManager not sorting?

    - by Alarion
    First off, a quick rundown of what the code is doing. I have two tables: Companies and Clients. There is a one to many relationship between Companies and Clients (one company can have many clients). In some processing logic, I have both tables loaded into a DataSet - each are DataTables. I have added a DataRelation to set the relationship, and it works when I use GetChildRows on a record from the Companies table. However, I need to sort the records returned. After googling around some it looks like the DataViewManager is the way to go, and I have examined several basic examples. However, I cannot get my rows sorted. Am I missing something, or am I not using this how it is supposed to be used? Example code:

        Dim ldvmManager As New DataViewManager(mdsData)
        ldvmManager.DataViewSettings("Companies").Sort = "comName ASC"
        ldvmManager.DataViewSettings("Clients").ApplyDefaultSort = False
        ldvmManager.DataViewSettings("Clients").Sort = "entLastName ASC, entFirstName ASC"

        For Each mdrCurrentCompany In ldvmManager.DataViewSettings("Companies").Table.Rows
            Dim ldrClients() As DataRow = mdrCurrentCompany.GetChildRows("CompaniesClients")
        Next

    No matter what I do, the first Client returned has a last name that starts with a 'B' and the second has one that starts with a 'A'. From there, the order is all mixed up. If I use my entire data instead of the subset I was using to test with, the first Client returned has a last name that starts with 'J'. It seems like it is still using whatever default sort was being used before I tried to use the DataViewManager. Any ideas?
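    One thing that may explain it: GetChildRows runs against the underlying DataTable, so it ignores any sort defined on a DataView or in DataViewSettings. A sketch of going through the views instead, using DataViewManager.CreateDataView and DataRowView.CreateChildView (table, column and relation names as in the question; the explicit Sort on the child view is belt-and-braces in case the "Clients" settings do not carry over):

        Dim ldvmManager As New DataViewManager(mdsData)
        ldvmManager.DataViewSettings("Companies").Sort = "comName ASC"
        ldvmManager.DataViewSettings("Clients").Sort = "entLastName ASC, entFirstName ASC"

        Dim companiesView As DataView = ldvmManager.CreateDataView(mdsData.Tables("Companies"))

        For Each companyRow As DataRowView In companiesView
            Dim clientsView As DataView = companyRow.CreateChildView("CompaniesClients")
            clientsView.Sort = "entLastName ASC, entFirstName ASC"

            For Each clientRow As DataRowView In clientsView
                ' clientRow("entLastName") etc. now come back in sorted order.
            Next
        Next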

    Read the article

  • Sync data between a windows desktop app and windows mobile client app

    - by Chris W
    I need to knock up a very quick prototype/proof of concept application to demo to someone within the next couple of days so I've minimal time to research this as fully as I normally would. The set-up is a very simple database application running on a laptop - will only ever be a single user updating a couple of tables so I was thinking of knocking up a basic Win Forms app against SQL Compact. Visual Studio's auto generated data grid edit screens will be fine with a little customisation. The second aspect is to then add a windows mobile client application that can pull data from both tables stored on the laptop, edit some data and insert some extra rows before sending the changes back to the laptop copy of the database. I've not done any WinMo development so what's the best approach for me to look at. Is it easy enough to sync data between the two databases when the WinMo device is connected to the laptop with USB? Most of the samples I've looked at so far seem to be syncing SQL Compact with SQL Standard using IIS which seems a bit overkill. The volumes of data to be synced are so small that I can easily write some manual sync code if it's easy for me to query/update the Compact DB from the laptop application when the device is connected.

    Read the article

  • How do I call Matlab in a script on Windows?

    - by Benjamin Oakes
    I'm working on a project that uses several languages:

    - SQL for querying a database
    - Perl/Ruby for quick-and-dirty processing of the data from the database and some other bookkeeping
    - Matlab for matrix-oriented computations
    - Various statistics languages (SAS/R/SPSS) for processing the Matlab output

    Each language fits its niche well and we already have a fair amount of code in each. Right now, there's a lot of manual work to run all these steps that would be much better scripted. I've already done this on Linux, and it works relatively well. On Linux:

        matlab -nosplash -nodesktop -r "command"

    or

        echo "command" | matlab -nosplash -nodesktop

    ...opens Matlab in a "command line" mode. (That is, no windows are created -- it just reads from STDIN, executes, and outputs to STDOUT/STDERR.) My problem is that on Windows (XP and 7), this same code opens up a window and doesn't read from / write to the command line. It just stares me blankly in the face, totally ignoring STDIN and STDOUT. How can I script running Matlab commands on Windows? I basically want something that will do:

        ruby database_query.rb
        perl legacy_code.pl
        ruby other_stuff.rb
        matlab processing_step_1.m
        matlab processing_step_2.m
        # etc, etc.

    I've found out that Matlab has an -automation flag on Windows to start an "automation server". That sounds like overkill for my purposes, and I'd like something that works on both platforms. What options do I have for automating Matlab in this workflow?
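    A sketch of what the Windows side can look like, assuming a reasonably recent MATLAB: on Windows the launcher returns immediately and detaches from the console, so -wait is used to block until MATLAB exits and -logfile to capture what would have gone to STDOUT on Linux. The script name is the placeholder from the question.

        :: Windows batch sketch
        matlab -nosplash -nodesktop -minimize -wait ^
               -logfile processing_step_1.log ^
               -r "try, processing_step_1; catch err, disp(getReport(err)); end; exit"
        type processing_step_1.log

    The -r string has to end in exit (or quit) or the MATLAB instance stays open; wrapping the script in try/catch keeps a script error from leaving it hanging. The same -r/-logfile form also works on Linux, which helps keep one wrapper for both platforms.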

    Read the article

  • Anyone Experiencing Slow Builds With VS2010?

    - by MrKWatkins
    Hi, We've recently upgraded to the final release of VS2010 and are experiencing very slow build times compared to the same code under 2008. I was wondering if anyone else is experiencing the same so I can work out whether it's just our environment or not? A few details: Using VS2010 Ultimate on Windows 7 with fairly beefy machines, talking to TFS 2010. The solution has been upgraded from VS2008 but still builds against .NET 3.5 and ASP.NET MVC 1.0. It doesn't seem to be the compilation itself taking long but something else in the build process. This is because even projects that are up to date and don't need compiling are taking a few seconds or so to process. It's not due to a Visual Studio add-in because a couple of guys in the team haven't installed any. The first build after loading VS2010 is pretty quick, then builds seem to slow down over time. For example, one of the projects in my solution just took 00:00:00.08 to process after a restart. (The project was up to date and didn't need compiling.) I then immediately hit rebuild and it jumps to 00:00:01.33. We're also experiencing the problem with another solution that uses .NET 4.0 that was building perfectly fine under VS2010 RC. There are no build events or anything like that I can blame, just straightforward assembly builds. The IDE is not very responsive during the slow builds. Is anyone else having similar problems? Update: It looks like resolving assembly references is taking a long time. Looking at the MSBuild diagnostic output for the example above, the first build has 30ms for ResolveAssemblyReferences, the second build has 800ms. Subsequent builds seem to be taking longer copying stuff around, e.g. CopyFilesToOutputDirectory jumps from 1ms to 27ms.

    Read the article

  • Send email from webpage without postback/refresh

    - by jb
    Hi all. Long time reader, first time writer. Quick query I hope someone can help me with. I have a webpage (PHP, JavaScript) which has a small contact form for a user to fill in and email us. Not a problem in itself, I have a working form already. Nothing noteworthy, just a form posting to a PHP page which sends an email:

        <form id="myform" name="myform" method="post" action="Mailer.php">

    So the user fills in the form, hits submit, the page changes to Mailer.php and they find their way back to where they were. What I want instead is for the user to stay on the same page when submit is pushed, and for the form div to update itself to just say 'message sent' or something. I just want to avoid a full page refresh. Much like how on, say, Facebook, commenting on a status only updates that div. Hope that is phrased clearly enough. Cheers all.
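    A sketch of one common way to do this with jQuery, assuming jQuery is (or can be) loaded on the page and that Mailer.php can simply return its response without redirecting; the #contact-box id is made up to stand for the div wrapping the form:

        // Sketch: intercept the submit, post the form in the background, and
        // swap in a confirmation message instead of navigating to Mailer.php.
        $(function () {
            $('#myform').submit(function (event) {
                event.preventDefault();                      // stop the full-page post
                $.post('Mailer.php', $(this).serialize(), function () {
                    $('#contact-box').html('Message sent');  // #contact-box is hypothetical
                });
            });
        });

    The same pattern can be written with plain XMLHttpRequest if adding jQuery is not an option; the key point is preventing the default submit and letting the AJAX response drive the small DOM update.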

    Read the article

  • PHP, MySQL, Memcache / Ajax Scaling Problem

    - by Jeff Andersen
    I'm building an ajax tic tac toe game in PHP/MySQL. The premise of the game is to be able to share a url like mygame.com/123 with your friends and play multiple simultaneous games. The way I have it set up is that a file (reload.php) is being called every 3 seconds while the user is viewing their game board space. This reload.php builds their game boards and the output (html) replaces their current game board (thus showing games in which it is their turn). Initially I built it entirely with PHP/MySQL and had zero caching. A friend gave me a suggestion to try doing all of the temporary/quick read information through memcache (storing moves, and ID matchups) and then building the game boards from this information. My issue is that both solutions encounter a wall when there are roughly 30-40 active users with roughly 40-50 games running. It is running on a VPS from VPS.net with 2 nodes. (Dedicated CPU: 1.2GHz, RAM: 752MB) Each call to reload.php performs 3 selects and 2 insert queries. The size of the data being pulled is negligible. The same actions happen on index.php to build the boards for the initial visit. Now that the backstory is done, my question is: would there be a bottleneck in that each user is polling the same file every 3 seconds to rebuild their gameboards, and that all users are sitting on index.php from which the AJAX calls are made within the HTML? If so, is it possible to spread the users' calls out over a set of files designated to building the game boards (e.g. reload1.php, reload2.php, reload3.php, etc.) and direct users to the appropriate file? Would this relieve the pressure? A long winded explanation; however, I didn't have anywhere else to ask. Thanks very much for any insight.

    Read the article

  • Asynchronous Controller is blocking requests in ASP.NET MVC through jQuery

    - by Jason
    I have just started using the AsyncController in my project to take care of some long-running reports. Seemed ideal at the time since I could kick off the report and then perform a few other actions while waiting for it to come back and populate elements on the screen. My controller looks a bit like this. I tried to use a thread to perform the long task which I'd hoped would free up the controller to take more requests:

        public class ReportsController : AsyncController
        {
            public void LongRunningActionAsync()
            {
                AsyncManager.OutstandingOperations.Increment();
                var newThread = new Thread(LongTask);
                newThread.Start();
            }

            private void LongTask()
            {
                // Do something that takes a really long time
                // .......
                AsyncManager.OutstandingOperations.Decrement();
            }

            public ActionResult LongRunningActionCompleted(string message)
            {
                // Set some data up on the view or something...
                return View();
            }

            public JsonResult AnotherControllerAction()
            {
                // Do a quick task...
                return Json("...");
            }
        }

    But what I am finding is that when I call LongRunningAction using the jQuery ajax request, any further requests I make after that back up behind it and are not processed until LongRunningAction completes. For example, call LongRunningAction which takes 10 seconds and then call AnotherControllerAction which is less than a second. AnotherControllerAction simply waits until LongRunningAction completes before returning a result. I've also checked the jQuery code, but this still happens if I specifically set "async: true":

        $.ajax({
            async: true,
            type: "POST",
            url: "/Reports.aspx/LongRunningAction",
            dataType: "html",
            success: function(data, textStatus, XMLHttpRequest) {
                // ...
            },
            error: function(XMLHttpRequest, textStatus, errorThrown) {
                // ...
            }
        });

    At the moment I just have to assume that I'm using it incorrectly, but I'm hoping one of you guys can clear my mental block!

    Read the article

  • Effect.toggle and AJAX Calls

    - by kunalsawlani
    Hi, I am using the following code to capture all the AJAX calls being made from my app, in order to show a busy spinner till the call is completed. It works well when the requests are well spaced out. However, when the requests are made in quick succession, some of the AJAX calls do not get registered, or the onCreate and onComplete actions get kicked off at the same time, and more often than not the busy spinner continues to appear on the screen after all the calls have successfully completed. Is there any check I can perform at the end of the call to see whether the element is still visible, so that I can hide it?

        document.observe("dom:loaded", function() {
            $('loading').hide();
            Ajax.Responders.register({
                // When an Ajax call is made.
                onCreate: function() {
                    new Effect.toggle('loading', 'appear');
                    new Effect.Opacity('display-area', { from: 1.0, to: 0.3, duration: 0.7 });
                },
                onComplete: function() {
                    new Effect.toggle('loading', 'appear');
                    new Effect.Opacity('display-area', { from: 0.3, to: 1, duration: 0.7 });
                }
            });
        });

    Thanks!

    Read the article

  • How to split this string and identify first sentence after last '*'?

    - by DaveDev
    I have to get a quick demo for a client, so this is a bit hacky. Please don't flame me too much! :-) I'm getting a string similar to the following back from the database:

        The object of the following is to do: * blah 1 * blah 2 * blah 3 * blah 4. Some more extremely uninteresting text. Followed by yet another sentence full of extremely uninteresting text. Thankfully this is the last sentence.

    I need to format this so that each * represents a bullet point, and the sentence after the last * goes onto a new line, ideally as follows (the asterisks here really are literal '*' characters, not rendered bullets):

        The object of the following is to do:
        * blah 1
        * blah 2
        * blah 3
        * blah 4.
        Some more extremely uninteresting text. Followed by yet another sentence full of extremely uninteresting text. Thankfully this is the last sentence.

    It's easy enough to split the string by the * character and replace that with <br /> *. I'm using the following for that:

        string description = GetDescription();
        description = description.Replace("*", "<br />*"); // it's going onto a web page.

    but the result this gives me is:

        The object of the following is to do:
        * blah 1
        * blah 2
        * blah 3
        * blah 4. Some more extremely uninteresting text. Followed by yet another sentence full of extremely uninteresting text. Thankfully this is the last sentence.

    I'm having a bit of difficulty identifying the first sentence after the last '*' so I can put a break there too. Can somebody show me how to do this?
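    A sketch of one way to add that final break, assuming the sentence belonging to the last bullet always ends at the first ". " after the last '*'; GetDescription() is the method from the question:

        // Sketch: break before each '*', then break once more after the first
        // sentence that follows the last '*'.
        string description = GetDescription();
        description = description.Replace("*", "<br />*");

        int lastBullet = description.LastIndexOf('*');
        if (lastBullet >= 0)
        {
            // The last bullet's sentence ends at the first ". " after that '*'.
            int sentenceEnd = description.IndexOf(". ", lastBullet);
            if (sentenceEnd >= 0)
            {
                description = description.Insert(sentenceEnd + 1, "<br />");
            }
        }

    If the bullet text can itself contain ". ", the rule for "end of sentence" needs tightening (for example with a regular expression), but for demo data shaped like the sample above this is enough.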

    Read the article

  • Mgmt wants to re-title my position: Any help...? [closed]

    - by JohnFlyTN
    Management here wants to re-title my position, since I'm doing quite a bit of different work than was originally planned. They want my input. After a quick glance over my skill set and job duties, what would we need to describe this position as? I'll just list things I'm at least proficient in; I will not list things I have a passing knowledge of.

    About me: ~10 years software development.
    Languages: C, C++, Perl, PHP, C#, TCL, Unix shell scripting, SQL (TSQL, PLSQL)
    Systems: MS-DOS, Windows 3.1 to 7 for client, NT 4 to 2008 for server, OS/2, IBM MVS & z/OS, Linux (multiple distros), AIX

    Current position: I do all sorts of in-house software. The range is single user apps to large systems spanning multiple OS's. One of the larger projects I've designed and coded is about 100k lines of C#, and a database where I have been the sole designer and maintainer. I have near total freedom to design as I see fit; restraints are usually budgetary.

    Skills required to replace me in my current role: Windows and Unix admin, database design, .NET up to 3.5 (C#, ASP.NET), C++, Perl, good skills in designing large and efficient data processing systems.

    Given this small level of information, what would you see this as being titled? (Is more information required to render a decision?)

    Read the article

  • Too many columns to index - use mySQL Partitions?

    - by Christopher Padfield
    We have an application with a table with 20+ columns that are all searchable. Building indexes for all these columns would make write queries very slow; and any really useful index would often have to be across multiple columns increasing the number of indexes needed. However, for 95% of these searches, only a small subset of those rows need to be searched upon, and quite a small number - say 50,000 rows. So, we have considered using mySQL Partition tables - having a column that is basically isActive which is what we divide the two partitions by. Most search queries would be run with isActive=1. Most queries would then be run against the small 50,000 row partition and be quick without other indexes. Only issue is the rows where isActive=1 is not fixed; i.e. it's not based on the date of the row or anything fixed like that; we will need to update isActive based on use of the data in that row. As I understand it that is no problem though; the data would just be moved from one partition to another during the UPDATE query. We do have a PK on id for the row though; and I am not sure if this is a problem; the manual seemed to suggest the partition had to be based on any primary keys. This would be a huge problem for us because the primary key ID has no basis on whether the row isActive.
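    A sketch of what that partitioning could look like, assuming isActive is a small integer column (the table and partition names here are made up). The rule the manual is pointing at is that every unique key, including the primary key, must contain the partitioning column, so the primary key would need to become (id, isActive):

        -- Sketch: split the table into an "active" and an "inactive" partition.
        ALTER TABLE search_rows
            DROP PRIMARY KEY,
            ADD PRIMARY KEY (id, isActive);

        ALTER TABLE search_rows
            PARTITION BY LIST (isActive) (
                PARTITION p_active   VALUES IN (1),
                PARTITION p_inactive VALUES IN (0)
            );

        -- An UPDATE that flips isActive moves the row between partitions:
        -- UPDATE search_rows SET isActive = 0 WHERE id = 12345;

    Widening the primary key this way is usually harmless as long as nothing else depends on id being enforced as unique on its own; foreign keys are not a concern here, since MySQL partitioned tables cannot have them anyway.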

    Read the article

  • Skip subdirectory in python import

    - by jstaab
    Ok, so I'm trying to change this:

        app/
        - lib.py
        - models.py
        - blah.py

    Into this:

        app/
        - __init__.py
        - lib.py
        - models/
            - __init__.py
            - user.py
            - account.py
            - banana.py
        - blah.py

    And still be able to import my models using from app.models import User rather than having to change it to from app.models.user import User all over the place. Basically, I want everything to treat the package as a single module, but be able to navigate the code in separate files for development ease. The reason I can't just add something like

        for file in __all__:
            from file import *

    to models/__init__.py is that I have circular references between the model files. A fix I don't want is to import those models from within the functions that use them. But that's super ugly. Let me give you an example:

        # user.py
        ...
        from app.models import Banana
        ...

        # banana.py
        ...
        from app.models import User
        ...

    I wrote a quick pre-processing script that grabs all the files, re-writes them to put imports at the top, and puts it into models.py, but that's hardly an improvement, since now my stack traces don't show the line number I actually need to change. Any ideas? I always thought __init__ was probably magical, but now that I dig into it, I can't find anything that lets me provide myself this really simple convenience.
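    One pattern that may keep from app.models import User working, sketched below: models/__init__.py re-exports the classes, and each model module does its cross-model import at the bottom of the file, after its own class exists, which is usually enough to break an import-time cycle (as opposed to importing inside functions). Module and class names are the ones from the question.

        # app/models/__init__.py -- sketch: re-export so existing imports keep working.
        from app.models.user import User
        from app.models.account import Account
        from app.models.banana import Banana

        __all__ = ["User", "Account", "Banana"]

        # app/models/user.py -- sketch: define the class first, then pull in the
        # circular dependency at the bottom, once User is defined.
        class User(object):
            def favourite_fruit(self):
                return Banana()          # looked up at call time, so this is safe

        from app.models.banana import Banana   # deliberately at the bottom

        # app/models/banana.py mirrors it:
        class Banana(object):
            def owner(self):
                return User()

        from app.models.user import User        # deliberately at the bottom

    This only works when the classes do not need each other at class-definition time (for example as base classes); in that case the pieces genuinely belong in one module, and the pre-processing approach or a lazy attribute lookup is the fallback.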

    Read the article

  • Combining two exe files

    - by Sophia
    Here's some background to my problem: I have a project in Visual C++ 2006 and a project in Visual C++ 2010 Express. Both compile to an exe file each. I cannot convert my 2006 project to 2010 because I get a lot of "unable to load project" errors. I also cannot port my 2010 project code to 2006 (I always get errors no matter what I try, something to do with libraries). My final solution requires me to only have ONE executable. Is there anything I can do to achieve that? I've done a quick search on Google and found there to be exe joiners, but I've also heard that those things are often used to make malware. For reference, I am working with "dummy" clients, and therefore want to simplify things on their end as much as possible. Thus, having them execute one exe is better than having them execute two. Also, I do not wish for their antivirus to go haywire because I used some program to join two exes together. What should I do? Edit: The two project files do different things. For example, the VS2006 project sets up a server, and the VS2010 project grabs info on the user's OS. The code for the "server", I think, has a lot of dependencies and for some reason cannot be converted to Visual C++ 2010. The code for "grabbing" requires some newer libraries and compiling options, and would not work if I ported it to 2006.

    Read the article

  • Testing Web Application: "Mirror" ad-hoc testing in another window

    - by Narcissus
    I don't even really know if the title is the best way to explain what I'm trying to do, but anyway... We have a web app that is being ported to a number of DB backends via MDB2. Our unit tests are pretty lacking at the moment, but our internal users are pretty good at knowing what to test to see if things are broken. What I'm 'imagining' is a browser plug-in (don't really care which browser it is for) or a similar system that essentially takes every event from one window and 'mirrors' it in the other browser/s. The reason I'd like this is so that I can have various installations that use different DB backends, and have the user open a window/tab to each installation. From there, however, I'd like them to be able to 'work' in one window and have that 'work' occur at the same time in each of the 'cloned' windows. From there, they should be able to do some quick eyeballing of the information that comes back, without having to worry (very much) about timing differences and so on. I know it's a big ask, but I figure if anyone knows of a solution, I'd find it here... Any thoughts?

    Read the article

  • How can i test my TSQL syntax?

    - by acidzombie24
    Quick question: how do I get some kind of database to use to test my SQL syntax and create basic data? I have SQLite code which I'll soon put on a server. I have SQL Server 2008 installed with Visual Studio 2010. I tried connecting to the database and had no luck. I also tried using a .mdf file instead, thinking it's a file and I won't have connectivity issues. Wrong - I still couldn't connect, and I used this site to help me (I'm aware it's 2005). In that case I used:

        var conn = new SqlConnection(@"Server=.\SQLExpress;AttachDbFilename=C:\dev\src\test\SQL_DB_VS_Test\test.mdf;Database=dbo;Trusted_Connection=Yes;");

    Exception:

        Unable to open the physical file "C:\dev\src\test\SQL_DB_VS_Test\test.mdf". Operating system error 5: "5(Access is denied.)".
        Cannot attach the file 'C:\dev\src\test\SQL_DB_VS_Test\test.mdf' as database 'dbo'.

    With Trusted_Connection set to no I get:

        Login failed for user ''.

    (What user am I supposed to set?) I created the .mdf with Visual Studio somehow.
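    Two things that may be going on, sketched below: "Database=dbo" names a schema rather than a database, and operating system error 5 usually means the SQL Server Express service account has no NTFS rights to the .mdf. Attaching the file as a user instance sidesteps the permission problem because that instance runs as the logged-on user; the path is the one from the question:

        // Sketch: attach the .mdf under a SQL Server Express user instance.
        var conn = new SqlConnection(
            @"Data Source=.\SQLEXPRESS;" +
            @"AttachDbFilename=C:\dev\src\test\SQL_DB_VS_Test\test.mdf;" +
            @"Integrated Security=True;" +
            @"User Instance=True;");
        conn.Open();

    The alternative is to attach the .mdf to the SQLEXPRESS instance once (via Management Studio Express) and connect with Data Source=.\SQLEXPRESS plus Initial Catalog=<database name>, which avoids the file-permission question entirely.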

    Read the article

  • How do I introspect on a SQL Server?

    - by MetaHyperBolic
    I have a server with a vendor application which is heavily database-reliant. I need to make some minor changes to the data in a few tables in the database in an automated fashion. Just INSERTs and UPDATEs, nothing fancy. Vendors being vendors, I can never be quite sure when they change the schema of a database during upgrade. To that end, how do I ask the SQL server, in some scriptable fashion, "Hey, does this table still exist? Yeah, cool, okay, but does it have this column? What's the data type and size on that? Is it nullable? Could you give me a list of tables? In this table, could you give me a list of columns? Any primary keys there?" I do not need to do this for the whole schema, only part of it, just a quick check of the database before I launch into things. We have Microsoft SQL Server 2005 on it currently, but it might easily move to Microsoft SQL Server 2008. I am probably not using the correct terminology when searching. I do know that ORM is not only too much overhead for this sort of thing, but also that I have no chance of pitching it to my coworkers.
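    A sketch of the kind of queries that answer those questions, using the INFORMATION_SCHEMA views that exist in both SQL Server 2005 and 2008; 'SomeVendorTable' is a placeholder:

        -- Does this table still exist?
        SELECT 1
        FROM INFORMATION_SCHEMA.TABLES
        WHERE TABLE_NAME = 'SomeVendorTable';

        -- What columns does it have, with type, size and nullability?
        SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME = 'SomeVendorTable'
        ORDER BY ORDINAL_POSITION;

        -- Which columns make up its primary key?
        SELECT kcu.COLUMN_NAME
        FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc
        JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE kcu
          ON kcu.CONSTRAINT_NAME = tc.CONSTRAINT_NAME
        WHERE tc.TABLE_NAME = 'SomeVendorTable'
          AND tc.CONSTRAINT_TYPE = 'PRIMARY KEY';

    Because these are ordinary SELECTs, they can run from whatever scripting tool already drives the INSERTs and UPDATEs, as a pre-flight check before touching the data.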

    Read the article

  • What is the fastest (to access) struct-like object in Python?

    - by DNS
    I'm optimizing some code whose main bottleneck is running through and accessing a very large list of struct-like objects. Currently I'm using namedtuples, for readability. But some quick benchmarking using 'timeit' shows that this is really the wrong way to go where performance is a factor:

        Named tuple with a, b, c:
            >>> timeit("z = a.c", "from __main__ import a")
            0.38655471766332994

        Class using __slots__, with a, b, c:
            >>> timeit("z = b.c", "from __main__ import b")
            0.14527461047146062

        Dictionary with keys a, b, c:
            >>> timeit("z = c['c']", "from __main__ import c")
            0.11588272541098377

        Tuple with three values, using a constant key:
            >>> timeit("z = d[2]", "from __main__ import d")
            0.11106188992948773

        List with three values, using a constant key:
            >>> timeit("z = e[2]", "from __main__ import e")
            0.086038238242508669

        Tuple with three values, using a local key:
            >>> timeit("z = d[key]", "from __main__ import d, key")
            0.11187358437882722

        List with three values, using a local key:
            >>> timeit("z = e[key]", "from __main__ import e, key")
            0.088604143037173344

    First of all, is there anything about these little timeit tests that would render them invalid? I ran each several times, to make sure no random system event had thrown them off, and the results were almost identical. It would appear that dictionaries offer the best balance between performance and readability, with classes coming in second. This is unfortunate, since, for my purposes, I also need the object to be sequence-like; hence my choice of namedtuple. Lists are substantially faster, but constant keys are unmaintainable; I'd have to create a bunch of index-constants, i.e. KEY_1 = 1, KEY_2 = 2, etc. which is also not ideal. Am I stuck with these choices, or is there an alternative that I've missed?
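    One middle ground that may be worth pushing through the same timeit harness is a __slots__ class that also implements __getitem__, so the objects stay attribute-readable but remain indexable like a sequence; the a/b/c field names mirror the benchmarks above:

        # Sketch: attribute access at __slots__ speed, plus sequence-style indexing.
        class Record(object):
            __slots__ = ('a', 'b', 'c')

            def __init__(self, a, b, c):
                self.a, self.b, self.c = a, b, c

            def __getitem__(self, index):
                # r[0] -> r.a, r[1] -> r.b, r[2] -> r.c
                return getattr(self, self.__slots__[index])

        r = Record(1, 2, 3)
        assert r.c == r[2] == 3

    Attribute access keeps the __slots__ timing measured above; indexing goes through a method call, so it will be slower than a bare list - whether that trade-off works depends on which access pattern dominates the hot loop.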

    Read the article

  • Need to sort 3 arrays by one key array

    - by jeff6461
    I am trying to get 3 arrays sorted by one key array in Objective-C for the iPhone. Here is an example to help out:

        Array 1   Array 2   Array 3   Array 4
        1         15        21        7
        3         12        8         9
        6         7         8         0
        2         3         4         8

    When sorted, I want this to look like:

        Array 1   Array 2   Array 3   Array 4
        1         15        21        7
        2         3         4         8
        3         12        8         9
        6         7         8         0

    So arrays 2, 3 and 4 move along with Array 1 when it is sorted. Currently I am using a bubble sort to do this, but it lags so badly that it crashes my app. The code I am using is:

        int flag = 0;
        int i = 0;
        int temp = 0;
        do {
            flag = 1;
            for (i = 0; i < distancenumber; i++) {
                if (distance[i] > distance[i + 1]) {
                    temp = distance[i];
                    distance[i] = distance[i + 1];
                    distance[i + 1] = temp;

                    temp = FlowerarrayNumber[i];
                    FlowerarrayNumber[i] = FlowerarrayNumber[i + 1];
                    FlowerarrayNumber[i + 1] = temp;

                    temp = BeearrayNumber[i];
                    BeearrayNumber[i] = BeearrayNumber[i + 1];
                    BeearrayNumber[i + 1] = temp;

                    flag = 0;
                }
            }
        } while (flag == 0);

    where distancenumber is the number of elements in all of the arrays, and distance is Array 1, my key array; the other arrays get sorted along with it. If anyone can help me get a merge sort (or something faster - it is running on an iPhone, so it needs to be quick and light) to do this, that would be great. I cannot figure out how the recursion works in this method, and so I'm having a hard time getting the code to work. Any help would be greatly appreciated.
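    A sketch of one alternative that avoids writing a sort by hand, assuming plain C int arrays as in the code above: pack the parallel arrays into one row per element, let qsort() order the rows by the key, then unpack. qsort is O(n log n), which should be comfortably fast and light for an iPhone-sized data set. (The same idea in Objective-C proper would be an NSArray of small objects sorted with a comparator.)

        /* Sketch: sort parallel arrays by packing them into rows. */
        #include <stdlib.h>

        typedef struct {
            int distance;      /* key */
            int flower;
            int bee;
        } Row;

        static int compare_rows(const void *a, const void *b) {
            const Row *ra = a, *rb = b;
            return (ra->distance > rb->distance) - (ra->distance < rb->distance);
        }

        void sort_parallel(int *distance, int *FlowerarrayNumber, int *BeearrayNumber, int n) {
            Row *rows = malloc(n * sizeof *rows);
            for (int i = 0; i < n; i++) {
                rows[i].distance = distance[i];
                rows[i].flower   = FlowerarrayNumber[i];
                rows[i].bee      = BeearrayNumber[i];
            }
            qsort(rows, n, sizeof *rows, compare_rows);   /* O(n log n) */
            for (int i = 0; i < n; i++) {
                distance[i]          = rows[i].distance;
                FlowerarrayNumber[i] = rows[i].flower;
                BeearrayNumber[i]    = rows[i].bee;
            }
            free(rows);
        }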

    Read the article

  • Polling a web URL until event

    - by Jaxo
    I'm really sorry about the crappy title - if anybody has a better way of wording it, please edit it! I basically need to have a C# application run a function if the output of a URL is a certain value. For example, if the website says blue the background colour will be blue, red to make it red, etc. The problem is I don't want to spam my webserver with checks. The 4 bytes it downloads each time is negligible, but if I were to deploy this type of system on multiple computers, it would get slower and slower and the bandwidth would add up quickly. So my question is: How can my desktop application run a piece of code only when a web URL has a different output without checking each time? I can't use sockets, and any sort of LAN protocol won't end up working. My reasoning behind this potentially nefarious code is to be able to mute computers by updating a file on the website (as you may have seen in my previous question today, sorry!). I'd like it to be rather quick, and not have the refresh time minutes apart, a few seconds at the most would be ideal. How can I accomplish this? The website's code is easy, but getting the C# application to check when it changes is the part I'm stuck on. Nothing shows up on the website other than the command. Thanks!

    Read the article

  • Setting up a NAS with Citrix XenServer

    - by JasonBrown
    Just a quick query for anyone who's worked with XenServer. I want to set up a NAS at home but with virtualization (I've looked into VMWare Server and KVM, and I quite like KVM!), but I was told about XenServer 5.5. I have commodity hardware (ASUS board, dual core 2.66GHz CPU with 8GB RAM), and I need to set up a fileserver to house about 2-3TB worth of data (big chunky video - not porn!). It needs to run Linux (preferably CentOS) but also run Windows virtualised for testing. I was thinking of going the XenServer route; however, I want to be able to offer a VM access to the 2-3TB of HDDs (5 HDD drives) directly so it can do its thing (maybe using FreeNAS). Would this be possible with XenServer? Or will I have to do more work - and another box - to offer this? My goals are to use FreeNAS (ZFS!) for the fileserver, CentOS for SVN and other bits we need to use (LAMP stack), and Windows for our win32 testing, all on one box. I see these iSCSI target bits and get scared.

    Read the article

  • Consecutive calls/evaulations in a form?

    - by Dave
    Hey guys, simple question... I'm working with XLISP to write a program, but I've run into a simple fundamental problem that I can't seem to work around; perhaps someone has a quick fix. I'm trying to write an if statement whose then-clause evaluates multiple forms and returns the value of the last. For example:

        (setq POSITION 'DINING-ROOM)

        (defun LOOK (DIRECTION ROOM) ... )
        (defun SETPOS (ROOM) ... )
        (defun WHERE () ... )

        (defun MOVE (DIRECTION)
          (if (not (equal nil (LOOK DIRECTION POSITION)))  ; If there is a room in that direction
              (                                            ; Then-block: Go to that room. Return where you are.
                (SETPOS (LOOK DIRECTION ROOM))
                (WHERE)
              )
              (                                            ; Else-block: Return error
                (list 'CANT 'GO 'THERE)
              )
          )
        )

    The logical equivalent intended is:

        function Move(Direction) {
            if (Look(Direction, Room) != null) {
                SetPos(Look(Direction, Room));
                return Where();
            } else {
                return "Can't go there";
            }
        }

    (Apologies for the poor web-formatting.) The problem I have is with:

        ( (SETPOS (LOOK DIRECTION ROOM)) (WHERE) )

    I simply want to return the evaluation of WHERE, but I need to execute the SETPOS function first. XLISP doesn't like the extra parentheses: if I remove the outer set, my WHERE list becomes my else (I don't want that). If I remove the sets around SETPOS and WHERE, it treats WHERE like an argument for SETPOS; I also don't want that. So, how do I simply evaluate the first, then the second, and then return the value of the last evaluated?
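    The usual fix, sketched below on the assumption that XLISP follows the Common Lisp convention here, is to wrap the then-forms in progn, which evaluates its forms in order and returns the value of the last one. (The sketch uses POSITION in the SETPOS call, since ROOM is not bound inside MOVE in the original.)

        ; Sketch: progn runs each form in order and returns the last value.
        (defun MOVE (DIRECTION)
          (if (not (equal nil (LOOK DIRECTION POSITION)))
              (progn                                   ; then-block: several forms
                (SETPOS (LOOK DIRECTION POSITION))
                (WHERE))                               ; value of (WHERE) is returned
              (list 'CANT 'GO 'THERE)))                ; else-block: single form, no wrapper needed

    A let with an empty binding list, or restructuring the branch as a cond clause (cond clauses accept multiple forms), accomplishes the same thing.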

    Read the article

  • Delete on a very deep tree

    - by Kathoz
    I am building a suffix trie (unfortunately, no time to properly implement a suffix tree) for a 10-character set. The strings I wish to parse are going to be rather long (up to 1M characters). The tree is constructed without any problems; however, I run into some when I try to free the memory after being done with it. In particular, if I set up my constructor and destructor like this (where CNode.child is a pointer to an array of 10 pointers to other CNodes, and count is a simple unsigned int):

        CNode::CNode() {
            count = 0;
            child = new CNode* [10];
            memset(child, 0, sizeof(CNode*) * 10);
        }

        CNode::~CNode() {
            for (int i = 0; i < 10; i++)
                delete child[i];
        }

    I get a stack overflow when trying to delete the root node. I might be wrong, but I am fairly certain that this is due to too many destructor calls (each destructor calls up to 10 other destructors). I know this is suboptimal both space- and time-wise; however, this is supposed to be a quick-and-dirty solution to the repeated substring problem.

    tl;dr: how would one go about freeing the memory occupied by a very deep tree? Thank you for your time.
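    A sketch of one way around the recursion depth, assuming the CNode members are accessible to this helper (or it is made a friend or static member): tear the trie down with an explicit stack so the call depth stays constant, and leave the destructor non-recursive. Note that the original destructor also never frees the child array itself.

        // Sketch: iterative teardown so a very deep trie cannot overflow the call stack.
        #include <stack>

        void destroy_trie(CNode* root) {
            std::stack<CNode*> pending;
            if (root) pending.push(root);

            while (!pending.empty()) {
                CNode* node = pending.top();
                pending.pop();
                for (int i = 0; i < 10; ++i) {
                    if (node->child[i])
                        pending.push(node->child[i]);
                }
                delete[] node->child;  // the array of 10 child pointers
                delete node;           // assumes ~CNode() no longer deletes children
            }
        }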

    Read the article
