Search Results

Search found 21501 results on 861 pages for 'slow connection'.

  • MyTop doesn't display executed queries

    - by recluze
    Hi, I'm trying to use mytop to figure out which queries are being executed and how long they take. I can connect to the db properly, but this is all mytop shows:

        MySQL on localhost (5.1.41-3ubuntu12.8)    up 0+01:50:47 [09:30:43]
        Queries: 3.0   qps: 0   Slow: 0.0   Se/In/Up/De(%): 68767/00/00/00
        Key Efficiency: 99.1%   Bps in/out: 0.0/1.2

        Id    User  Host/IP    DB        Time  Cmd    Query or State
        --    ----  -------    --        ----  ---    --------------
        225   root  localhost            0     Query  show full processlist
        186   joom  localhost  culinary  5684  Sleep

    The number of queries increases over time, but the queries themselves never appear in the list. Is there some configuration I need to do to enable this?
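
    If the queries are short-lived, they can simply complete between mytop refreshes. Below is a minimal sketch of polling the process list yourself at a much tighter interval; the pymysql package and the connection credentials are assumptions, not part of the original question:

        # Poll SHOW FULL PROCESSLIST rapidly to catch short-lived queries
        # that a multi-second mytop refresh would miss. pymysql and the
        # credentials below are placeholders.
        import time
        import pymysql

        conn = pymysql.connect(host="localhost", user="root", password="secret")
        try:
            with conn.cursor() as cur:
                while True:
                    cur.execute("SHOW FULL PROCESSLIST")
                    for pid, user, host, db, cmd, t, state, info in cur.fetchall():
                        if info:  # only rows that carry an actual query body
                            print(pid, user, db, t, info)
                    time.sleep(0.2)
        finally:
            conn.close()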

  • How to determine the size (in bytes) of a file downloading using NSURLConnection?

    - by RexOnRoids
    I need to know the size of the file I am downloading (in bytes) into my app using NSURLConnection (GET). Here is my byte-receiving code, if it helps. What I need to know is how to get the file size in bytes so that I can use it to show a UIProgressView.

        - (void)connection:(NSURLConnection *)theConnection didReceiveData:(NSData *)data
            // A delegate method called by the NSURLConnection as data arrives.
            // We just write the data to the file.
        {
            #pragma unused(theConnection)
            NSInteger       dataLength;
            const uint8_t * dataBytes;
            NSInteger       bytesWritten;
            NSInteger       bytesWrittenSoFar;

            assert(theConnection == self.connection);

            dataLength = [data length];
            dataBytes  = [data bytes];

            bytesWrittenSoFar = 0;
            do {
                bytesWritten = [self.fileStream write:&dataBytes[bytesWrittenSoFar]
                                            maxLength:dataLength - bytesWrittenSoFar];
                assert(bytesWritten != 0);
                if (bytesWritten == -1) {
                    [self _stopReceiveWithStatus:@"File write error"];
                    break;
                } else {
                    bytesWrittenSoFar += bytesWritten;
                }
            } while (bytesWrittenSoFar != dataLength);
        }
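
    The total size isn't available inside didReceiveData; it arrives with the response itself, via the expectedContentLength of the NSURLResponse passed to the connection:didReceiveResponse: delegate method (it is NSURLResponseUnknownLength when the server doesn't say). The same idea, sketched in Python purely as an illustration of the calculation that drives a progress bar (the URL is a placeholder):

        # The total comes from the response's Content-Length header; progress
        # is bytes received so far divided by that total.
        from urllib.request import urlopen

        resp = urlopen("http://example.com/file.zip")
        total = int(resp.headers.get("Content-Length", 0))  # 0 if the server omits it

        received = 0
        while True:
            chunk = resp.read(64 * 1024)
            if not chunk:
                break
            received += len(chunk)
            if total > 0:
                print("progress: %.1f%%" % (100.0 * received / total))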

  • MTU, DSL router and stalling TCP

    - by user38843
    I am seeing a stalling TCP connection problem. The problem arises when I try to scp stuff from a remote system to my home network. My home network is connected to the internet via PPPoE (ADSL+), and everything works perfectly when working from within my home network. The ADSL router has the MTU set to 1492, but with that setting the scp from the remote system does not work - it stalls! When I change the MTU on my router to 1500, that scp works perfectly, but internet access from my home network becomes very slow to most www sites - even local ones. Just wondering where the problem lies - my ISP blocking ICMP, etc.? Thanks!
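
    This pattern (bulk transfers stall at one MTU, general browsing breaks at the other) is the classic symptom of broken path-MTU discovery: some hop is dropping the ICMP "fragmentation needed" messages, so full-size TCP segments silently vanish. One quick check is to binary-search the largest ping payload that passes with Don't Fragment set. A rough sketch wrapping Linux's ping - the target host is a placeholder, and 28 bytes are added for the IP and ICMP headers:

        # Binary-search the largest DF-flagged ping payload that gets through.
        import subprocess

        def ping_df(host, payload):
            r = subprocess.run(
                ["ping", "-c", "1", "-M", "do", "-s", str(payload), host],
                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
            return r.returncode == 0

        lo, hi = 500, 1500
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if ping_df("example.com", mid):
                lo = mid
            else:
                hi = mid - 1

        print("largest payload:", lo, "-> path MTU about", lo + 28)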

  • Home ZFS based NAS...What processor/chipset to use?

    - by MrBlargityBlarg
    So, I'm building a home/personal NAS. My plan is to expose SMB fileshares for sharing files/media between hosts, and also to carve an iSCSI target LUN out of it for use by VMware as a datastore. I want to use ZFS (software RAID), so that means I'll be using FreeNAS, Solaris Express, or OpenIndiana. My question is basically: how much horsepower do I need? Obviously I/O is going to be my bottleneck, but I want to be sure that I am not limiting my I/O because of a slow processor or chipset. So far the hardware plan is to use an Intel i3 and a motherboard with one of the H87, Q87, or Z87 chipsets, plus a SAS controller (JBOD, no RAID); if budget allows, I'm also hoping to get an SSD for the ZFS L2ARC and ZIL. Does anyone think I could get away with an Intel Atom or a cheaper/less-capable processor/chipset than the i3 and [HQZ]87 listed above?

  • C# wrapper and Callbacks

    - by fergs
    I'm in the process of writing a C# wrapper for Dallmeier Common API light (camera & surveillance systems). I've never written a wrapper before, but I have used the Canon EDSDK C# wrapper, so I'm using the Canon wrapper as a guide to writing the Dallmeier one. I'm currently having issues with wrapping a callback. The API manual gives the following:

        int dlm_connect(unsigned long ulWindowHandle,
                        const char *strIP,
                        const char *strUser1, const char *strPwd1,
                        const char *strUser2, const char *strPwd2,
                        void (*callback)(void *pParameters, void *pResult, void *pInput),
                        void *pInput)

    Arguments:

    - ulWindowHandle - handle of the window that is passed to the ViewerSDK to display video and messages there
    - strUser1/2 - names of the users to log in; if only single-user login is used, strUser2 is NULL
    - strPwd1/2 - passwords of both users; if strUser2 is NULL, strPwd2 is ignored

    Return: this function creates a SessionHandle that has to be passed. Callback: pParameters will be structured as:

    - unsigned long ulFunctionID
    - unsigned long ulSocketHandle (handle to the socket of the established connection)
    - unsigned long ulWindowHandle
    - int SessionHandle (session handle of the session created)
    - const char *strIP
    - const char *strUser1, const char *strPwd1
    - const char *strUser2, const char *strPWD2

    pResult is a pointer to an integer representing the result of the operation: zero on success, negative values are error codes. From what I've read on the net and Stack Overflow, C# uses delegates for the purpose of callbacks. So I create my callback function:

        public delegate uint DallmeierCallback(DallPParameters pParameters, IntPtr pResult, IntPtr pInput);

    I create the connection function (completing the parameter list to mirror the C signature above):

        [DllImport("davidapidis.dll")]
        public extern static int dlm_connect(ulong ulWindowHandle,
                                             string strIP,
                                             string strUser1, string strPwd1,
                                             string strUser2, string strPwd2,
                                             DallmeierCallback inDallmeierFunc,
                                             IntPtr pInput);

    And (I think) the DallPParameters as a struct:

        [StructLayout(LayoutKind.Sequential)]
        public struct DallPParameters
        {
            public ulong ulFunctionID;
            public ulong ulSocketHandle;
            public ulong ulWindowHandle;
            // ...
        }

    All of this is in my wrapper class. Am I heading in the right direction, or is this completely wrong?

  • Temporarily Utilizing 304 Header on Apache for Crawlers

    - by Volomike
    I have a client who has a hosting arrangement with 400 customer sites, all hosted through suPHP in CGI mode on Apache. The sysop is now gone, and the client is calling on me to roll out a new PHP thing. Trouble is, server load is very high right now, and we have found that it's due to the crawlers. One customer in particular complained of a slow website; we engaged a 304-header plugin on his site against most crawlers, and his site perked right up. We'd like to lower the load by issuing a global 304 header to all the crawlers while letting human visitors through. I have a long list of user-agent keywords to trap for. What's the best way to temporarily engage that global 304 header while allowing human visitors to get right on through? I mean, I could roll out 400 .htaccess file changes, but it would be ideal to make this change in one central Apache config and have it automatically affect all the sites at once.
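
    The logic being asked for is small: if the User-Agent contains a crawler keyword, answer 304 Not Modified and never touch the PHP stack; otherwise pass the request through untouched. Purely as an illustration of that logic (the question itself is about expressing it in a central Apache config), here is a sketch as WSGI middleware, with a stand-in keyword list:

        # Hypothetical crawler-304 gate; the keyword list is a placeholder
        # for the real user-agent list.
        CRAWLER_KEYWORDS = ("googlebot", "bingbot", "slurp", "baiduspider")

        def crawler_304(app):
            def middleware(environ, start_response):
                ua = environ.get("HTTP_USER_AGENT", "").lower()
                if any(k in ua for k in CRAWLER_KEYWORDS):
                    # Tell the crawler nothing has changed; a 304 carries no body.
                    start_response("304 Not Modified", [])
                    return [b""]
                return app(environ, start_response)  # humans pass straight through
            return middleware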

  • iPhone JSON object releasing itself?

    - by MidnightLightning
    I'm using the JSON Framework addon for iPhone's Objective-C to catch a JSON object that's an array of dictionary-style objects via HTTP. Here's my connectionDidFinishLoading function:

        - (void)connectionDidFinishLoading:(NSURLConnection *)connection
        {
            [connection release];
            NSString *responseString = [[NSString alloc] initWithData:responseData
                                                             encoding:NSUTF8StringEncoding];
            [loadingIndicator stopAnimating];

            // Grab the JSON array of dictionaries
            NSArray *responseArray = [responseString JSONValue];
            NSLog(@"Response Array: %@", responseArray);
            if ([responseArray respondsToSelector:@selector(count)]) {
                NSLog(@"Returned %@ items", [responseArray count]);  // <-- EXC_BAD_ACCESS here
            }
            [responseArray release];
            [responseString release];
        }

    The issue is that the code throws an EXC_BAD_ACCESS error on the second NSLog line. The EXC_BAD_ACCESS error, I think, indicates that the variable got released from memory, but the first NSLog command works just fine (and shows that the data is all there); it seems that only the count message triggers the error, even though the respondsToSelector call at least thinks that responseArray should be able to respond to it. When running under the debugger, it crashes on that second line, but the stack shows that the responseArray object is still defined and has 12 objects in it (so the debugger, at least, can get an accurate count of the contents of that variable). Is this a problem with the JSON framework's creation of that NSArray, or is there something wrong with my code?

  • How do you recommend installing Linux on a computer that has no external drive or ability to boot from one?

    - by 7777
    I have an old Toshiba Portege 3505 "ultralight" laptop, meaning it doesn't have any kind of disk drive on it at all, that I'd like to completely reformat and install Linux on. However, it won't boot from any drive (and I don't have any on hand), so I'll have to install it from a USB drive (which I doubt it boots from either). (I'm not sure how to change the settings in my BIOS to get my computer to boot from a USB stick. Any ideas for this?) How do you recommend I do this? I want to note that I don't want to run Linux off a LiveUSB, I want to actually install it on the machine. I was thinking about Damn Small Linux, it's tiny and all I need. Any advice or suggestions for something else though? Finally, I'm a total newbie to this, I've never installed Linux on anything before so I might be a little slow on some stuff! Thanks!

  • How do you create large, growable, shared filesystems on Linux at AWS?

    - by Reece
    What are acceptable/reasonable/best ways to provide large, growable, shared storage at AWS, exposed as a single filesystem? We're currently making 1TB EBS volumes ~biweekly and NFS-exporting them with no_subtree_check and nohide. In this setup, distinct exports appear under a single mount on the client. This arrangement does not scale well. The options we've considered:

    - LVM2 with ext4: resize2fs is too slow.
    - Btrfs on Linux: not obviously ready for prime time yet.
    - ZFS on Linux: not obviously ready for prime time yet (although LLNL uses it).
    - ZFS on Solaris: the future of this combo is uncertain (to me), and it adds a new OS to the mix.
    - glusterfs: heard mostly good, but two scary (and maybe old?) stories.

    The ideal solution would provide sharing, a single fs view, easy expandability, snapshots, and replication. Thanks for sharing ideas and experience.

  • NAS + XBMC combination

    - by Hendrik
    I currently own a WD Sharespace and I am not too happy with it: it freezes constantly and it's rather slow. I've been thinking of replacing it. I also own a WD HD Live, and it works as it is supposed to, but recently I've seen XBMC in action and was quite impressed. I've been wondering lately about the possibility of replacing both products with one new device: a fast, reliable NAS that I can install XBMC on, which is capable of rendering video and has an HDMI out. Do any such devices exist for sale, or is the only way to build one from scratch? I've seen HTPCs for sale, but I've never seen any of them with a RAID setup. I've also read about building your own NAS with FreeNAS, but those setups don't allow you to run XBMC. Does anyone have experience with this?

  • Linux's best filesystem for working with 10,000s of files without overloading the system I/O

    - by mhambra
    Hi all. It is known that certain AMD64 Linuxes are subject to becoming unresponsive under heavy disk I/O (see the Gentoo forums: "AMD64 system slow/unresponsive during disk access (Part 2)"); unfortunately, I have such a system. I want to put the /var/tmp/portage and /usr/portage trees on a separate partition, but which FS to choose for it? Requirements:

    * for journaling, performance is preferred over safe data read/write operations
    * optimized for reading/writing 10000s of small files

    Candidates:

    * ext2 without any journaling
    * BtrFS

    In Phoronix tests, BtrFS demonstrated good random-access performance (far better than XFS, so it may be less CPU-aggressive). However, the unpacking operation seems to be faster with XFS there - yet in my testing, unpacking a kernel tree to XFS makes my system react slower by 51%, regardless of any renice'd processes and/or schedulers. Why no ReiserFS? Google'd this (q: reiserfs ext2 cpu):

        1 Apr 2006 ... Surprisingly, the ReiserFS and the XFS used significantly more
        CPU to remove file tree (86% and 65%) when other FS used about 15% (Ext3 and ...

    Is it the same now?
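
    Since the workload is so well defined (tens of thousands of small files), it is easy to benchmark candidate filesystems directly rather than rely on published numbers. A rough sketch - point TARGET at a mount of the filesystem under test (the path is an assumption), and for a fair read pass, drop the page cache between the two loops:

        # Time writing and re-reading 10,000 small files on a candidate FS.
        import os
        import time

        TARGET = "/mnt/testfs/bench"   # mount of the filesystem under test
        N = 10000
        PAYLOAD = b"x" * 4096          # 4 KiB per file, portage-tree sized

        os.makedirs(TARGET, exist_ok=True)

        start = time.time()
        for i in range(N):
            with open(os.path.join(TARGET, "f%05d" % i), "wb") as f:
                f.write(PAYLOAD)
        print("write: %.2fs" % (time.time() - start))

        start = time.time()
        for i in range(N):
            with open(os.path.join(TARGET, "f%05d" % i), "rb") as f:
                f.read()
        print("read:  %.2fs" % (time.time() - start))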

  • Pass table as parameter to SQLCLR TV-UDF

    - by Skeolan
    We have a third-party DLL that can operate on a DataTable of source information and generate some useful values, and we're trying to hook it up through SQLCLR to be callable as a table-valued UDF in SQL Server 2008. Taking the concept here one step further, I would like to program a CLR table-valued function that operates on a table of source data from the DB. I'm pretty sure I understand what needs to happen on the T-SQL side of things; but what should the method signature look like in the .NET (C#) code? What would be the parameter datatype for "table data from SQL Server"? e.g.

        /* Setup */
        CREATE TYPE InTableType AS TABLE (LocationName VARCHAR(50), Lat FLOAT, Lon FLOAT)
        GO
        CREATE TYPE OutTableType AS TABLE (LocationName VARCHAR(50), NeighborName VARCHAR(50), Distance FLOAT)
        GO
        CREATE ASSEMBLY myCLRAssembly FROM 'D:\assemblies\myCLR_UDFs.dll'
            WITH PERMISSION_SET = EXTERNAL_ACCESS
        GO
        CREATE FUNCTION GetDistances(@locations InTableType) RETURNS OutTableType
            AS EXTERNAL NAME myCLRAssembly.GeoDistance.SQLCLRInitMethod
        GO

        /* Execution */
        DECLARE @myTable InTableType
        INSERT INTO @myTable(LocationName, Lat, Lon) VALUES('aaa', -50.0, -20.0)
        INSERT INTO @myTable(LocationName, Lat, Lon) VALUES('bbb', -20.0, -50.0)
        SELECT * FROM @myTable

        DECLARE @myResult OutTableType
        INSERT INTO @myResult
            MyCLRTVFunction @myTable --returns a table result calculated using the input

    The lat/lon distance thing is a silly example that should of course be better handled entirely in SQL; but I hope it illustrates the general intent of table-in, table-out through a table-valued UDF tied to a SQLCLR assembly. I am not certain this is possible; what would the SQLCLRInitMethod signature look like in the C#?

        public class GeoDistance
        {
            [SqlFunction(FillRowMethodName = "FillRow")]
            public static IEnumerable SQLCLRInitMethod(<appropriateType> myInputData)
            {
                //...
            }

            public static void FillRow(...)
            {
                //...
            }
        }

    If it's not possible, I know I can use a "context connection=true" SQL connection within the C# code to have the CLR component query for the necessary data given the relevant keys; but that's sensitive to changes in the DB schema, so I hope to just have SQL bundle up all the source data and pass it to the function. Bonus question - assuming this works at all, would it also work with more than one input table?

  • virt-viewer slower than virt-manager when viewing

    - by map7
    I've got a thin client server on which I run a few VMs for users under KVM, which I manage through virt-manager. What I've noticed is that if I start a VM guest on a thin client using the command 'virt-viewer ', the guest is painfully slow to move around. However, if on the same thin client I start the same guest VM through virt-manager, it's fast. What are the differences here? Can I start a VM without having the user load up virt-manager and double-click on their VM? Should I be looking at using SPICE in virt-viewer instead of VNC, which is what I currently use?

  • OleDb database to DataSet and back in C#?

    - by Troy
    I'm writing a program that lets a user:

    - Connect to an (arbitrary) database
    - View all of the tables in that database in separate DataGridViews
    - Edit them in the program, generate random data, and see the results
    - Choose to commit those changes or revert

    So I discovered the DataSet class, which looks like it's capable of holding everything that a database would, and I decided that the best thing to do here would be to load everything into one DataSet, let the user edit it, and then save the DataSet back to the database. The problem is that the only way I can find to load the database tables is this:

        set = new DataSet();
        DataTable schema = connection.GetOleDbSchemaTable(
            OleDbSchemaGuid.Tables,
            new string[] { null, null, null, "TABLE" });

        foreach (DataRow row in schema.Rows)
        {
            string tableName = row.Field<string>("TABLE_NAME");
            DataTable dTable = new DataTable();
            new OleDbDataAdapter("SELECT * FROM " + tableName, connection).Fill(dTable);
            dTable.TableName = tableName;
            set.Tables.Add(dTable);
        }

    while it seems like there should be a simpler way, given that DataSets appear to be designed for exactly this purpose. The real problem, though, is when I try saving these things. In order to use the OleDbDataAdapter.Update() method, I'm told that I have to provide valid INSERT queries. Doesn't that kind of negate the whole point of having a class to handle this stuff for me? Anyway, I'm hoping somebody can either explain how to load and save a database into a DataSet or give me a better idea of how to do what I'm trying to do. I could always parse the commands together myself, but that doesn't seem like the best solution.

  • XAMPP: Access Forbidden!

    - by Yar
    I just installed a fresh XAMPP on OS X. Apache runs and I can see the splash page. I opened httpd.conf and set both places that point to htdocs to someplace else, which results in Apache showing an "Access Forbidden!" message. I plugged my directory in here:

        <Directory "/Applications/XAMPP/xamppfiles/htdocs">

    and here:

        DocumentRoot "/Applications/XAMPP/xamppfiles/htdocs"

    I have set the permissions to 777 for everything, including the enclosing directory, but to no avail. Strangely, I just did this whole thing with MAMP and had no problem serving that directory, but it was slow.

  • Twitter-OAuth update_profile_*_image methods problem [EpiTwitter]

    - by KPL
    People, I have been struggling with the two methods update_profile_image and update_profile_background_image. I am using the EpiTwitter library and uploading GIFs. Twitter returns the expected result for update_profile_background_image, but returns 401 for update_profile_image, and the image is not changed. Here are the headers caught from $apiObj->headers in my case while using update_profile_background_image:

        Array
        (
            [Date] => Sat, 24 Apr 2010 17:51:36 GMT
            [Server] => hi
            [Status] => 200 OK
            [X-Transaction] => 1272131495-55190-23911
            [ETag] => "b6a421c01936f3547802ae6b59ee7ef3"
            [Last-Modified] => Sat, 24 Apr 2010 17:51:36 GMT
            [X-Runtime] => 0.13990
            [Content-Type] => application/json; charset=utf-8
            [Content-Length] => 1272
            [Pragma] => no-cache
            [X-Revision] => DEV
            [Expires] => Tue, 31 Mar 1981 05:00:00 GMT
            [Cache-Control] => no-cache, no-store, must-revalidate, pre-check=0, post-check=0
            [Set-Cookie] => *REMOVED*
            [Vary] => Accept-Encoding
            [Connection] => close
        )

    and for update_profile_image:

        Array
        (
            [Date] => Sat, 24 Apr 2010 17:57:58 GMT
            [Server] => hi
            [Status] => 401 Unauthorized
            [WWW-Authenticate] => Basic realm="Twitter API"
            [X-Runtime] => 0.02263
            [Content-Type] => text/html; charset=utf-8
            [Content-Length] => 152
            [Cache-Control] => no-cache, max-age=300
            [Set-Cookie] => *REMOVED*
            [Expires] => Sat, 24 Apr 2010 18:02:58 GMT
            [Vary] => Accept-Encoding
            [Connection] => close
        )

    Can somebody help me out?

  • Performance problem with Win Server 2008 in VirtualBox on Ubuntu 9.10 Server

    - by Diskilla
    Hey, I've got an Ubuntu 9.10 server and tried to install Windows Server 2008 in a virtual machine. The problem is, the server has 8 cores, but the virtual machine seems to have problems with that. The VM is very slow and not really usable if I set the VM config to more than one core. If it is set to only one core, everything works fine, but that's not a solution, because I bought the server with several cores for a reason... Does anybody know this problem or have a solution? I feel like I've read the entire internet via Google... no solution found. Greetz, Diskilla P.S.: I'm from Germany and it's kind of hard to ask in English :-) I hope I didn't make too many mistakes.

  • MacPorts, Fink, etc.

    - by Ben Alpert
    On my Mac OS X (10.5, soon to be 10.6) machine, how would you recommend I install various software that's been ported from Linux? I don't install such software very frequently, but I've been using MacPorts and it always seems quite slow, presumably because it has to compile the packages on-the-fly. I'd much prefer a package management system that has binary packages, saving me the need to compile things every time I want to download something new. I think Fink has binaries for some of the packages, but I usually see MacPorts recommended as the system to use. Which do you think works better and why? (Or is there another system that I haven't heard of?)

  • How to optimize this JSON/jQuery/JavaScript function in IE7/IE8?

    - by melaos
    Hi guys, I'm using this function to parse this JSON data, but I find the function to be really slow in IE7 and slightly slow in IE8. Basically, the first listbox generates the main product list, and a selection in the main list populates the second list. This is my data:

        [{"ProductCategoryId":209,"ProductCategoryName":"X-Fi","ProductSubCategoryId":668,"ProductSubCategoryName":"External Solutions","ProductId":15913,"ProductName":"Creative Xmod","ProductServiceLifeId":1},
         {"ProductCategoryId":209,"ProductCategoryName":"X-Fi","ProductSubCategoryId":668,"ProductSubCategoryName":"External Solutions","ProductId":15913,"ProductName":"Creative Xmod","ProductServiceLifeId":1},
         {"ProductCategoryId":209,"ProductCategoryName":"X-Fi","ProductSubCategoryId":668,"ProductSubCategoryName":"External Solutions","ProductId":18094,"ProductName":"Sound Blaster Wireless Receiver","ProductServiceLifeId":1},
         {"ProductCategoryId":209,"ProductCategoryName":"X-Fi","ProductSubCategoryId":668,"ProductSubCategoryName":"External Solutions","ProductId":16185,"ProductName":"Xdock Wireless","ProductServiceLifeId":1},
         {"ProductCategoryId":209,"ProductCategoryName":"X-Fi","ProductSubCategoryId":668,"ProductSubCategoryName":"External Solutions","ProductId":16186,"ProductName":"Xmod Wireless","ProductServiceLifeId":1}]

    and these are the functions that I'm using:

        //Three Product Panes function
        function populateMainPane() {
            $.getJSON('/Home/ThreePaneProductData/', function (data) {
                products = data;
                alert(JSON.stringify(products));

                var prodCategory = {};
                for (i = 0; i < products.length; i++) {
                    prodCategory[products[i].ProductCategoryId] = products[i].ProductCategoryName;
                } //end for

                //take only unique product category to be used
                var id = 0;
                for (id in prodCategory) {
                    if (prodCategory.hasOwnProperty(id)) {
                        $(".LBox1").append("<option value='" + id + "'>" + prodCategory[id] + "</option>");
                        //alert(prodCategory[id]);
                    }
                }

                var url = document.location.href;
                var parms = url.substring(url.indexOf("?") + 1).split("&");
                for (var i = 0; i < parms.length; i++) {
                    var parm = parms[i].split("=");
                    if (parm[0].toLowerCase() == "pid") {
                        $(".PanelProductReg").show();
                        var nProductIds = parm[1].split(",");
                        for (var k = 0; k < nProductIds.length; k++) {
                            var nProductId = parseInt(nProductIds[k], 10);
                            for (var j = 0; j < products.length; j++) {
                                if (nProductId == parseInt(products[j].ProductId, 10)) {
                                    addProductRow(nProductId, products[j].ProductName);
                                    j = products.length;
                                }
                            } //end for
                        }
                    }
                }
            });
        } //end function

        function populateSubCategoryPane() {
            var subCategory = {};
            for (var i = 0; i < products.length; i++) {
                if (products[i].ProductCategoryId == $('.LBox1').val())
                    subCategory[products[i].ProductSubCategoryId] = products[i].ProductSubCategoryName;
            } //end for

            //clear off the list box first
            $(".LBox2").html("");

            var id = 0;
            for (id in subCategory) {
                if (subCategory.hasOwnProperty(id)) {
                    $(".LBox2").append("<option value='" + id + "'>" + subCategory[id] + "</option>");
                    //alert(prodCategory[id]);
                }
            }
        } //end function

    Is there anything I can do to optimize this, or is this a known browser issue?

  • Boto - How to delete a record set from Route53 - "Tried to delete resource record set but it was not found"

    - by Tampa
    I am using the following to delete Route53 records; I get no error messages:

        conn = Route53Connection(aws_access_key_id, aws_secret_access_key)
        changes = ResourceRecordSets(conn, zone_id)
        change = changes.add_change("DELETE", sub_domain, "A", 60,
                                    weight=weight, identifier=identifier)
        change.add_value(ip_old)
        changes.commit()

    All required fields are present and they match: weight, identifier, ttl=60, etc. E.g.:

        test.com.  A  111.111.111.111  60  1  id1
        test.com.  A  111.111.111.222  60  1  id2

    I want to delete 111.111.111.222 and its record set. So, what is the proper way to delete a record set? For a record set, I will have multiple values that are distinguished by a unique identifier. When an IP address becomes inactive, I want to remove it from Route53; I am using a poor man's load balancing. Here is the metadata of the record I want to delete:

        {'alias_dns_name': None,
         'alias_hosted_zone_id': None,
         'identifier': u'15754-1',
         'name': u'hui.com.',
         'resource_records': [u'103.4.xxx.xxx'],
         'ttl': u'60',
         'type': u'A',
         'weight': u'1'}

    And the traceback:

        Traceback (most recent call last):
          File "/home/ubuntu/workspace/rtbopsConfig/classes/redis_ha.py", line 353, in <module>
            deleteRedisSubDomains(aws_access_key_id, aws_secret_access_key, platform=platform, sub_domain=sub_domain, redis_domain=redis_domain, zone_id=zone_id, ip_address=ip_address, weight=1, identifier=identifier)
          File "/home/ubuntu/workspace/rtbopsConfig/classes/redis_ha.py", line 341, in deleteRedisSubDomains
            changes.commit()
          File "/usr/local/lib/python2.7/dist-packages/boto-2.3.0-py2.7.egg/boto/route53/record.py", line 131, in commit
            return self.connection.change_rrsets(self.hosted_zone_id, self.to_xml())
          File "/usr/local/lib/python2.7/dist-packages/boto-2.3.0-py2.7.egg/boto/route53/connection.py", line 291, in change_rrsets
            body)
        boto.route53.exception.DNSServerError: DNSServerError: 400 Bad Request
        <?xml version="1.0"?>
        <ErrorResponse xmlns="https://route53.amazonaws.com/doc/2011-05-05/"><Error><Type>Sender</Type><Code>InvalidChangeBatch</Code><Message>Tried to delete resource record set hui.com., type A, SetIdentifier 15754-1 but it was not found</Message></Error><RequestId>9972af89-cb69-11e1-803b-7bde5b9c457d</RequestId></ErrorResponse>

    Thanks
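
    Route53 rejects a DELETE unless every attribute matches the stored record set exactly, including the fully qualified name with its trailing dot. One way to sidestep mismatches is to read the live record back and delete exactly what Route53 returns, rather than rebuilding it by hand. A sketch against boto 2's Route53 API, assuming zone_id, sub_domain, and identifier are defined as in the question:

        # Fetch the matching record set, then delete it field-for-field.
        from boto.route53.connection import Route53Connection
        from boto.route53.record import ResourceRecordSets

        conn = Route53Connection(aws_access_key_id, aws_secret_access_key)

        for rrset in conn.get_all_rrsets(zone_id, type="A", name=sub_domain):
            if rrset.name == sub_domain and rrset.identifier == identifier:
                changes = ResourceRecordSets(conn, zone_id)
                change = changes.add_change("DELETE", rrset.name, rrset.type,
                                            ttl=rrset.ttl,
                                            weight=rrset.weight,
                                            identifier=rrset.identifier)
                for value in rrset.resource_records:
                    change.add_value(value)
                changes.commit()
                break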

  • Can I be a wireless server WITHOUT offering internet?

    - by Kenny Hendrick
    I would like to pull into a truck stop and offer a folder of free documentaries to quell some of the ignorance. LOL. I run Linux ;-) - very happy to have made that switch, by the way - and have internal wireless in the laptop, but my bus has an antenna on the roof which I connect via USB to create a wireless signal (this is the one I'd like to focus on, since it will offer the most reach to all those truckers out there). My question is this: I'm running apache2 with a bunch of videos tossed into the www folder, and I can access it locally. I would like others who frown on paying for the truck stop's slow internet service to have an alternative: simply log onto me and get the narrow content I offer freely, without passwords and the like. Has anyone a good means of doing this? p.s. I've done this in the past but am getting old and forgetful (my crutch)
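
    Serving without internet is entirely doable: the laptop only needs to run its own access point (on Linux, typically hostapd plus a small DHCP server such as dnsmasq on the USB wireless interface) - no uplink is required, and apache2 then serves to whoever associates. As a stand-in for the Apache piece, even Python's built-in server can share a folder; a minimal sketch, assuming a hypothetical media path:

        # Share a folder of videos over HTTP on all interfaces, port 8000.
        # Python 3 standard library only; the path is a placeholder.
        import os
        from http.server import HTTPServer, SimpleHTTPRequestHandler

        os.chdir("/var/www/documentaries")  # folder to offer, read-only
        HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler).serve_forever()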

  • Invoking ADO.NET from MATLAB

    - by Andrew Shepherd
    It's possible to call .NET from MATLAB, so I thought I would try to use ADO.NET to connect to a database. I seem to have hit a blocking problem - any time you try to create a Command object, it throws an error. You can try this yourself:

        >> NET.addAssembly('System.Data');
        >> sqlconn = System.Data.SqlClient.SqlConnection();
        >> sqlconn.State

        ans =

            Closed

        >> % So far, so good
        >> sqlcmd = System.Data.SqlClient.SqlCommand();
        ??? Error using ==> System.Data.SqlClient.SqlCommand
        'Connection' is already defined as a property.

    Does anyone have some insight into this? It seems like a pure and simple bug on MATLAB's part - maybe it happens with every .NET class that has a property called "Connection". Should I just throw in the towel and give up on using MATLAB to talk to a database using .NET?

  • Where should executables be run from? Network share or client?

    - by user74150
    We have an executable that is used by 50+ client machines on a network and upgraded regularly. Is it acceptable to put the executable on a network share and have the client machines run it from there via a shortcut on their desktop? That way when we upgrade the .exe we can simply replace the one file with a new one and all clients will be accessing the new one. How will a slow or unstable network handle this? If this is not acceptable, what would be the best way to keep all clients updated with the latest .exe?
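
    One common middle ground, if running straight off the share proves fragile, is a tiny local launcher: it refreshes a local copy of the .exe only when the share's copy is newer, then always runs the local copy, so a slow or unstable network only matters during the brief copy step. A hypothetical sketch - the UNC and local paths are placeholders:

        # Refresh a local copy of the shared .exe when the network copy is
        # newer, then run the local copy.
        import os
        import shutil
        import subprocess

        SHARE = r"\\fileserver\apps\ourtool.exe"  # assumed UNC path
        LOCAL = r"C:\Apps\ourtool.exe"

        def newer(src, dst):
            return (not os.path.exists(dst)
                    or os.path.getmtime(src) > os.path.getmtime(dst))

        if newer(SHARE, LOCAL):
            os.makedirs(os.path.dirname(LOCAL), exist_ok=True)
            shutil.copy2(SHARE, LOCAL)  # copy2 preserves the mtime we compare on

        subprocess.run([LOCAL], check=False)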

  • Can't avoid starting MacBook in safe mode

    - by Aaron Brown
    I recently spilled some water on my MacBook (mid-2010) keyboard and it shorted out several of the keys. Notably, control and left option don't work, and the system thinks the left shift is permanently held down. I plugged in an external USB keyboard and all its keys work fine; there's only one problem: the computer always starts in safe mode, because the shift key is held down. I've tried holding down other keys (escape, space, and c, to name a few), and the control key doesn't work, so I can't try that. I also tried KeyRemap4MacBook, but it doesn't work in safe mode and doesn't seem to help on startup. I can log in to Windows with no problems (via rEFIt) and I can browse the internet with no problems, but I can't program on the Mac OS side in safe mode (it's really slow), which is mainly what I use this MacBook for. Any ideas out there on how to avoid starting in safe mode?

  • Load balance UDP traffic with session affinity and a way to take servers in & out of rotation

    - by William
    What is the best way to go about load balancing UDP traffic among a whole bunch of servers while keeping session affinity based on the user's IP? I also need to be able to take servers in and out of rotation for new clients, so that when clients join for the first time they get put on a server from a list of available servers, while clients already connected stay connected to their specific server. I have written the software to maintain such a list, but I can't seem to find anything that performs this functionality. If you need the context: this is to facilitate game tournaments for Minecraft: Pocket Edition, which is done over UDP, and I cannot change the protocol. And because tournaments open and close, I need to be able to place players on their proper servers. Performance is also a priority; I have a program that does this, but it is very bloated and slow. Thanks for any help! William
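
    For a sense of the mechanics involved, here is a sketch of a user-space UDP relay with per-client-IP affinity: the first packet from a new IP pins that IP to one of the currently "open" backends, and known IPs keep their backend even after it closes to new players. Addresses are placeholders, and the return path (backend replies to clients) is deliberately omitted for brevity - a production version would track it, which is what kernel-level balancers such as LVS/IPVS with a source-hashing scheduler do far more efficiently:

        # Minimal UDP relay with client-IP affinity; forward path only.
        import socket

        OPEN_BACKENDS = [("10.0.0.1", 19132), ("10.0.0.2", 19132)]  # editable live
        assigned = {}  # client IP -> backend; survives backend rotation

        front = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        front.bind(("0.0.0.0", 19132))

        while True:
            data, (ip, port) = front.recvfrom(65535)
            backend = assigned.get(ip)
            if backend is None:
                # First packet from this IP: pin it to an open backend.
                backend = OPEN_BACKENDS[hash(ip) % len(OPEN_BACKENDS)]
                assigned[ip] = backend
            front.sendto(data, backend)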
