Search Results

Search found 19446 results on 778 pages for 'network printer'.

Page 721/778 | < Previous Page | 717 718 719 720 721 722 723 724 725 726 727 728  | Next Page >

  • Is Stream.Write thread-safe?

    - by Mike Spross
    I'm working on a client/server library for a legacy RPC implementation and was running into issues where the client would sometimes hang when waiting to receive a response message to an RPC request message. It turns out the real problem was in my message framing code (I wasn't handling message boundaries correctly when reading data off the underlying NetworkStream), but it also made me suspicious of the code I was using to send data across the network, specifically in the case where the RPC server sends a large amount of data to a client as the result of a client RPC request.

    My send code uses a BinaryWriter to write a complete "message" to the underlying NetworkStream. The RPC protocol also implements a heartbeat algorithm, where the RPC server sends out PING messages every 15 seconds. The pings are sent out by a separate thread, so, at least in theory, a ping can be sent while the server is in the middle of streaming a large response back to a client. Suppose I have a Send method as follows, where stream is a NetworkStream:

        public void Send(Message message)
        {
            // Write the message to a temporary stream so we can send it all-at-once
            MemoryStream tempStream = new MemoryStream();
            message.WriteToStream(tempStream);

            // Write the serialized message to the stream.
            // The BinaryWriter is a little redundant in this
            // simplified example, but is here because
            // the production code uses it.
            byte[] data = tempStream.ToArray();
            BinaryWriter bw = new BinaryWriter(stream);
            bw.Write(data, 0, data.Length);
            bw.Flush();
        }

    So the question I have is: is the call to bw.Write (and by implication the call to the underlying Stream's Write method) atomic? That is, if a lengthy Write is still in progress on the sending thread, and the heartbeat thread kicks in and sends a PING message, will that thread block until the original Write call finishes, or do I have to add explicit synchronization to the Send method to prevent the two Send calls from clobbering the stream?
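
    For what it's worth, Stream.Write is not documented as atomic with respect to other writers, so two threads can interleave their bytes on the wire. A minimal sketch of one common fix is to serialize all senders on a private lock; the field name _sendLock is illustrative, and stream/Message are assumed to be the same members as in the question:

        // Sketch only: serialize all message sends on one lock so that a large
        // response and a heartbeat PING cannot interleave their bytes on the wire.
        private readonly object _sendLock = new object();

        public void Send(Message message)
        {
            MemoryStream tempStream = new MemoryStream();
            message.WriteToStream(tempStream);
            byte[] data = tempStream.ToArray();

            lock (_sendLock)
            {
                // Only one thread at a time may write a framed message to the stream.
                stream.Write(data, 0, data.Length);
                stream.Flush();
            }
        }

    Any approach that funnels every framed message through a single writer (a lock, or a dedicated send queue and thread) gives the same guarantee.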

  • What is the fastest way for reading huge files in Delphi?

    - by dummzeuch
    My program needs to read chunks from a huge binary file with random access. I have got a list of offsets and lengths which may have several thousand entries. The user selects an entry and the program seeks to the offset and reads length bytes. The program internally uses a TMemoryStream to store and process the chunks read from the file. Reading the data is done via a TFileStream like this: FileStream.Position := Offset; MemoryStream.CopyFrom(FileStream, Size); This works fine but unfortunately it becomes increasingly slower as the files get larger. The file size starts at a few megabytes but frequently reaches several tens of gigabytes. The chunks read are around 100 kbytes in size. The file's content is only read by my program. It is the only program accessing the file at the time. Also the files are stored locally so this is not a network issue. I am using Delphi 2007 on a Windows XP box. What can I do to speed up this file access?

  • Easy way to lock a file on a remote machine (windows)?

    - by roufamatic
    I've tracked down an error in my logs, and am trying to reproduce it. My theory is that a file sometimes gets locked in a specific folder, and when the application (ASP.NET) tries to delete that folder it hangs. I don't have the application running on my own machine so I'm debugging this on a remote server. But for the life of me, I can't seem to figure out a way to lock a file that prevents it from being deleted by the process. My first thought was to map the network path to a local drive and just leave a command prompt open to that folder. Locally that always fouls up my folder deletes, but apparently SMB is a bit more robust and doesn't grant me a lock. After that I created an infinite loop vbscript in the folder and executed it remotely. The file was deleted out from underneath the executing code. Man! I then tried creating a file on the server in that folder and removing all permissions. That didn't do the trick. I don't have access to the IIS settings so perhaps it's running under a privileged system account. So: what's a program that you know is free and I can quickly use to create an exclusive lock on a file so I can test my delete theory? Like a really, really bad Notepad clone or something. :-)
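
    For reference, a throwaway console program along these lines can hold an exclusive lock for testing; it is best run on the server itself (e.g. over remote desktop), since SMB clients may not hold the kind of lock you need. The path is a placeholder:

        using System;
        using System.IO;

        // A "really bad Notepad": open the file with no sharing and hold the handle
        // until Enter is pressed.  While this waits, deletes of the file (and of its
        // parent folder) should fail.
        class HoldLock
        {
            static void Main(string[] args)
            {
                string path = args.Length > 0 ? args[0] : @"C:\temp\lockme.txt";
                using (var fs = new FileStream(path, FileMode.OpenOrCreate,
                                               FileAccess.ReadWrite, FileShare.None))
                {
                    Console.WriteLine("Holding exclusive lock on " + path + " - press Enter to release.");
                    Console.ReadLine();
                }
            }
        }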

  • apache web server configuration problem

    - by mohit
    I want the Apache server to serve only the /var/www/ directory; right now it serves all the files on my system, starting from "/".

    1. I tried to edit httpd.conf in /etc/apache2 and placed the following content in it (initially it was empty):

        <Directory />
            Options None
            AllowOverride None
        </Directory>

        DocumentRoot "/var/www"

        <Directory "/var/www">
            Options Indexes FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    I then saved it, restarted the Apache server and put /var/www in the web browser address bar, but it still shows the higher-level directories too. I then edited the default and default-ssl files in the sites-available folder and repeated the same process; Apache still serves all the files on my system.

    2. When I try to use the command gedit httpd.conf I get the error:

        (gedit:2696): EggSMClient-WARNING **: Failed to connect to the session manager: None of the authentication protocols specified are supported
        GConf Error: Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash. See http://projects.gnome.org/gconf/ for information. (Details - 1: Failed to get connection to session: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.)
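
    As a hedged sketch, assuming a Debian/Ubuntu-style layout where the active configuration is the enabled virtual host under /etc/apache2/sites-available rather than httpd.conf, the restriction usually needs to go into that site file, e.g.:

        # /etc/apache2/sites-available/default  (the enabled site; reload Apache after editing)
        <VirtualHost *:80>
            DocumentRoot /var/www

            <Directory />
                Options None
                AllowOverride None
                Order deny,allow
                Deny from all
            </Directory>

            <Directory /var/www/>
                Options Indexes FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Then reload with sudo /etc/init.d/apache2 reload. The gedit errors typically come from launching a GUI editor via plain sudo; gksudo gedit (or a terminal editor such as nano) usually avoids them.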

  • Twitter API Rate Limit - Overcoming on an unauthenticated JSON Get with Objective C?

    - by Cian
    I see the rate limit is 150/hr per IP. This'd be fine, but my application is on a mobile phone network (with shared IP addresses). I'd like to query Twitter trends, e.g. GET /trends/1/json. This doesn't require authorization, however what if the user first authorized with my application using OAuth, then hit the JSON API? The request is built as follows:

        - (void) queryTrends:(NSString *) WOEID {
            NSString *urlString = [NSString stringWithFormat:@"http://api.twitter.com/1/trends/%@.json", WOEID];
            NSURL *url = [NSURL URLWithString:urlString];
            NSURLRequest *theRequest = [NSURLRequest requestWithURL:url
                                                        cachePolicy:NSURLRequestUseProtocolCachePolicy
                                                    timeoutInterval:10.0];
            NSURLConnection *theConnection = [[NSURLConnection alloc] initWithRequest:theRequest
                                                                             delegate:self
                                                                     startImmediately:YES];
            if (theConnection) {
                // Create the NSMutableData to hold the received data.
                theData = [[NSMutableData data] retain];
            } else {
                NSLog(@"Connection failed in Query Trends");
            }
            //NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:urlString]];
        }

    I have no idea how I'd build this request as an authenticated one, however, and haven't seen any examples to this effect online. I've read through the Twitter OAuth documentation, but I'm still puzzled as to how it should work. I've experimented with OAuth using Ben Gottlieb's prebuilt library, calling this in my first viewDidLoad:

        OAuthViewController *oAuthVC = [[OAuthViewController alloc] initWithNibName:@"OAuthTwitterDemoViewController" bundle:[NSBundle mainBundle]];
        // [self setViewController:aViewController];
        [[self navigationController] pushViewController:oAuthVC animated:YES];

    This should store all the keys required in the app's preferences, I just need to know how to build the GET request after authorizing! Maybe this just isn't possible? Maybe I'll have to proxy the requests through a server side application? Any insight would be appreciated!

  • Auditing front end performance on web application

    - by user1018494
    I am currently trying to performance tune the UI of a company web application. The application is only ever going to be accessed by staff, so the speed of the connection between the server and client will always be considerably better than if it were on the internet.

    I have been using performance auditing tools such as YSlow and Google Chrome's profiling tool to try and highlight areas that are worth targeting for investigation. However, these tools are written with the internet in mind. For example, the current suggestions from a Google Chrome audit of the application are as follows:

    Network Utilization
      - Combine external CSS (Red warning)
      - Combine external JavaScript (Red warning)
      - Enable gzip compression (Red warning)
      - Leverage browser caching (Red warning)
      - Leverage proxy caching (Amber warning)
      - Minimise cookie size (Amber warning)
      - Parallelize downloads across hostnames (Amber warning)
      - Serve static content from a cookieless domain (Amber warning)

    Web Page Performance
      - Remove unused CSS rules (Amber warning)
      - Use normal CSS property names instead of vendor-prefixed ones (Amber warning)

    Are any of these bits of advice totally redundant given the connection speed and usage pattern? The users will be using the application frequently throughout the day, so it doesn't matter if the initial hit is large (when they first visit the page and build their cache) so long as a minimal amount of work is done on future page views. For example, is it worth the effort of combining all of our CSS and JavaScript files? It may speed up the initial page view, but how much of a difference will it really make on subsequent page views throughout the working day? I've tried searching for this but all I keep coming up with is the standard internet-facing performance advice. Any advice on what to focus my performance tweaking efforts on in this scenario, or other auditing tool recommendations, would be much appreciated.

  • How to detect changing directory size in Perl

    - by materiamage
    Hello, I am trying to find a way of monitoring directories in Perl, in particular the size of a directory, and upon detecting a change in directory size, perform a particular action. The issue I have is with large files that require a noticeable amount of time to copy into this directory, i.e. 100MB. What happens (in Windows, not Unix) is that the system reserves enough disk space for the entire file, even though the copy is still in progress. This causes problems for me, because my script will try to perform an action on a file that has not finished copying over. I can easily detect directory size changes in Unix via 'du', but 'du' in Windows does not behave the same way. Are there any accurate methods of detecting directory size changes in Perl?

    Edit: Some points to clarify:
      - My Perl script is only monitoring a particular directory, and upon detecting a new file or a new directory, performs an action on that new file or directory. It is not copying any files; users on the network will be copying files into the directory I am monitoring.
      - The problem occurs when a new file or directory appears (copied, not moved) that is significantly large (100MB, but usually a couple of GB) and my program fires before this copy completes.
      - In Unix I can easily 'du' to see that the file/directory in question is growing in size, and take the appropriate action.
      - In Windows the size is static, so I cannot detect this change.
      - opendir/readdir/closedir is not feasible, as some of the directories that appear may contain thousands of files, and I want to avoid the overhead of

    Ideally I would like my program to be triggered on change, but I am not sure how to do this. As of right now it busy waits until it detects a change. The change in file/directory size is not in my control.
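
    One hedged approach, resting on an assumption about how the files arrive (Explorer or robocopy style copies, which normally hold the destination file open without write sharing): instead of watching sizes, poll until the new file can be opened for writing, for example:

        use strict;
        use warnings;

        # Sketch only: on Windows the copying process usually holds the destination
        # file open without write sharing, so the copy is "done" once we can open
        # the file read/write ourselves.
        sub copy_finished {
            my ($path) = @_;
            if (open my $fh, '+<', $path) {
                close $fh;
                return 1;    # nobody else holds it exclusively any more
            }
            return 0;        # still being written; try again later
        }

        # Usage: poll until the new file can be opened, then act on it.
        # until (copy_finished($new_file)) { sleep 2; }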

  • C++ iterators, default initialization and what to use as an uninitialized sentinel.

    - by Hassan Syed
    The Context

    I have a custom template container class put together from a map and vector. The map resolves a string to an ordinal, and the vector resolves an ordinal (only an initial string-to-ordinal lookup is done; future references are to the vector) to the entry. The entries are modified intrusively to contain a bool "assigned" and an iterator_type which is a const_iterator into the container class's map. My container class will use RCF's serialization code (which models boost::serialization) to serialize my container classes to nodes in a network. Serializing iterators is not possible, or a can of worms, and I can easily regenerate them once the vectors and maps are serialized on the remote site.

    The Question

    I need to default initialize, and be able to test that the iterator has not been assigned to (if it is assigned it is valid, if not it is invalid). Since map iterators are not invalidated by operations performed on the map (unless of course items are removed :D), am I to assume that map<x,y>::end() is a valid sentinel (regardless of the state of the map -- i.e., it could be empty) to initialize to? I will always have access to the parent map; I'm just unsure whether end() stays the same as the map's contents change. I don't want to use another level of indirection (--i.e., boost::optional) to achieve my goal; I'd rather forego compiler checks in favor of correct logic, but it would be nice if I didn't need to.

    Misc

    This question exists, but most of its content seems nonsense. Assigning NULL to an iterator is invalid according to g++ and clang++. This is another similar question, but it focuses on the common use-cases of iterators, which generally tends to be using the iterator to iterate; of course in this use-case the state of the container isn't meant to change while iteration is going on.
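
    For illustration, a small sketch of the sentinel idea (the types and names here are made up, not taken from the real container): std::map does not invalidate iterators on insert and only invalidates iterators to erased elements, so a stored end() keeps comparing equal to end() of the same map:

        #include <map>
        #include <string>

        // Sketch only: an intrusive entry that records whether its map iterator has
        // been assigned yet.  The "invalid" sentinel is the parent map's end().
        struct Entry {
            bool assigned;
            std::map<std::string, std::size_t>::const_iterator pos;
        };

        int main() {
            std::map<std::string, std::size_t> index;

            Entry e;
            e.assigned = false;
            e.pos = index.end();              // default "unassigned" sentinel

            // Later, after deserialization, regenerate the iterator:
            index["widget"] = 0;
            e.pos = index.find("widget");
            e.assigned = (e.pos != index.end());

            return e.assigned ? 0 : 1;
        }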

  • How to disable server-side caching on IIS 7.5 (asp net mvc3)

    - by troebr
    I'm struggling with my IIS setup regarding caching. Here's a brief description of my problem: I'm making a site for mobile and non-mobile, sharing the same controllers, i.e. mysite/page will serve either mysite/page.cshtml or mysite/M/page.cshtml, depending on the device. Here's the catch: it worked fine in my local and integration environments (Cassini and IIS 6), but on another machine (2008 R2 / IIS 7.5) there is apparently an aggressive server-side caching policy:

    If I access the website from a desktop machine, I get the correct pages (desktop version). If I then use my mobile phone to access the site, I still get the desktop version (which implies a server-side cache; my phone is not using the same network). Conversely, if I restart the server and access the site using my phone first, then I get the mobile version on my desktop (only for the pages I already visited, of course).

    I tried two solutions so far. Disabling OutputCache from my Web.config:

        <httpModules>
            [..]
            <remove name="OutputCache" />
        </httpModules>

    And unchecking "Enable output cache" under "Output Caching" for my site in IIS. What's bugging me is that I do not have this problem with my other server (IIS 6.0), although caching is enabled on that one, which leads me to think it is related to the caching additions in IIS 7. My question is simple: how does one disable server-side caching on IIS 7.5? Thanks in advance for your IIS lights!
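
    If the pages are in fact being cached by ASP.NET output caching (explicitly or via a global filter), one hedged option is to vary the cache entry by device instead of sharing one copy per URL; the attribute values and the "device" key below are illustrative, not a confirmed fix for this setup:

        using System.Web;
        using System.Web.Mvc;

        public class PageController : Controller
        {
            // A cached copy is shared by every client hitting the same URL, so a phone
            // can be handed the desktop HTML unless the entry is varied by device.
            [OutputCache(Duration = 60, VaryByParam = "*", VaryByCustom = "device")]
            public ActionResult Index()
            {
                return View();
            }
        }

        // In Global.asax.cs:
        public class MvcApplication : HttpApplication
        {
            public override string GetVaryByCustomString(HttpContext context, string custom)
            {
                return custom == "device"
                    ? (context.Request.Browser.IsMobileDevice ? "mobile" : "desktop")
                    : base.GetVaryByCustomString(context, custom);
            }
        }

    Alternatively, decorating the affected actions with OutputCache(Duration = 0, NoStore = true, VaryByParam = "*") is often used to switch output caching off entirely while isolating which layer is serving the stale copy.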

  • Git from localhost to remotehost with a team of three

    - by Mark McDonnell
    Hi, I'm completely new to Git. I've only just worked out how to use Github in a basic way (e.g. push my local file changes to Github - so I've not done 'pulling' down of content from Github and 'merging' it into my localhost version or anything like that). I had a look over at this existing question - Git: localhost remote development remote production - but I think it may have been a bit advanced for me at this stage as I didn't quite understand the terminology that most of the people were using. What I would like to achieve is to have a local server set-up that my team of developers can all 'push' to/'pull' from etc. And then have that local server upload any updated files automatically to our web server so we could see the updates live in the browser. I'm happy to get a server set-up in the office running Mac OSX Server and then installing Git on it and then getting the devs to write a shell script to push to the remote server but only if it was fairly easy for the devs local git to push to this new local server. I'm not a network engineer so I don't know what would need to be set-up for that to work, I know obviously we could set-up the server to be accessible via a local ip address like 192.168.0.xxx but not sure how that works with pushing to a git repository on that server? Would that literally be something like doing this on my local machine: git remote add MyGitFile git://192.168.0.xxx/MyGitFile.git ? Any ideas or advice you can give to a total Git newbie trying to help his team get a better work flow. Kind regards, Mark
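
    For reference, a minimal sketch of the usual setup; the addresses, account and paths are placeholders, and note that the git:// URL in the example would additionally need a running git daemon, so plain SSH (Remote Login on OS X Server) is often simpler:

        # On the office server (once): create a bare repository to push to and pull from.
        mkdir -p /Users/git/project.git && cd /Users/git/project.git
        git init --bare

        # On each developer's machine: add the server as a remote and push.
        git remote add origin ssh://git@192.168.0.xxx/Users/git/project.git
        git push origin master

        # Later, to pick up everyone else's commits:
        git pull origin master

    A post-receive hook in the bare repository can then copy or rsync a checked-out working tree to the web server whenever someone pushes, which covers the "automatically upload to the live site" part.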

  • Detaching all entities of T to get fresh data

    - by Goran
    Let's take an example where there are two types of entities loaded: Product and Category, with Product.CategoryId - Category.Id. We have available CRUD operations on Products (not Categories). If Categories are updated on another screen (or by another user on the network), we would like to be able to reload the Categories while preserving the context we currently use, since we could be in the middle of editing data and we do not want changes to be lost (and we cannot depend on saving, since we have incomplete data). Since there is no easy way to tell EF to get fresh data (added, removed and modified), we thought of two possible ways:

    1) Keeping Products attached to the context, and Categories detached from the context. This would mean that we lose the ability to access Product.Category.Name, which we do sometimes require, so we would need to resolve it manually (for example when printing data).

    2) Detaching / attaching all Categories from the current context:

        Context.ChangeTracker.Entries().Where(x => x.Entity.GetType() == typeof(T)).ForEach(x => x.State = EntityState.Detached);

    And then reload the Categories, which will get fresh data. Do you find any problem with this second approach? We understand that this will require all constraints to be put on foreign keys, and not on navigation properties, since when detaching all Categories, Product.Category navigation properties would be reset to null also. Also, there could be a potential performance problem, which we did not test, since there could be a couple of thousand products loaded, and all would need to resolve the navigation property when reloading. Which of the two do you prefer, and is there a better way (EF6 + .NET 4.0)?
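
    A rough sketch of option 2 with EF6's DbContext API; Category and the context parameter are assumptions based on the description, not the real model:

        using System.Data.Entity;
        using System.Linq;

        static class CategoryRefresher
        {
            // Sketch only: drop every tracked Category, then query again so EF
            // re-reads fresh rows (added, removed and modified) from the database.
            public static void Refresh(DbContext db)
            {
                foreach (var entry in db.ChangeTracker.Entries<Category>().ToList())
                    entry.State = EntityState.Detached;

                // Re-querying re-attaches the fresh Categories so that relationship
                // fix-up can repopulate Product.Category on the tracked Products.
                var fresh = db.Set<Category>().ToList();
            }
        }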

  • Advice needed: cold backup for SQL Server 2008 Express?

    - by Mikey Cee
    What are my options for achieving a cold backup server for SQL Server Express instance running a single database? I have an SQL Server 2008 Express instance in production that currently represents a single point of failure for my application. I have a second physical box sitting at the installation that is currently doing nothing. I want to somehow replicate my database in near real time (a little bit of data loss is acceptable) to the second box. The database is very small and resources are utilized very lightly. In the case that the production server dies, I would manually reconfigure my application to point to the backup server instead. Although Express doesn't support log shipping, I am thinking that I could manually script a poor man's version of it, where I use batch files to take the logs and copy them across the network and apply them to the second server at 5 minute intervals. Does anyone have any advice on whether this is technically achievable, or if there is a better way to do what I am trying to do? Note that I want to avoid having to pay for the full version of SQL Server and configure mirroring as I think it is an overkill for this application. I understand that other DB platforms may present suitable options (eg. a MySQL Cluster), but for the purposes of this discussion, let's assume we have to stick to SQL Server.
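
    As a sketch of the "poor man's log shipping" idea (database name, paths and schedule are placeholders; it assumes the database is in FULL recovery, a full backup has already been restored WITH NORECOVERY on the standby, and the scripts are run by Task Scheduler since Express has no SQL Server Agent):

        -- On the production instance, every 5 minutes (e.g. via sqlcmd from Task Scheduler):
        -- a real script would use unique file names so no log backup is ever overwritten
        -- before the standby has applied it.
        BACKUP LOG [AppDb]
            TO DISK = N'\\standby\ship\AppDb_log.trn'
            WITH INIT;

        -- On the standby instance, after copying the file locally, apply it in order:
        RESTORE LOG [AppDb]
            FROM DISK = N'D:\ship\AppDb_log.trn'
            WITH NORECOVERY;

        -- When production fails, bring the standby copy online and repoint the application:
        RESTORE DATABASE [AppDb] WITH RECOVERY;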

  • Replacement for Hamachi for SVN access

    - by Piers
    My company has been using Hamachi to access our SVN repository for a number of years. We are a small yet widely distributed development team with each programmer in a different country working from home. The server is hosted by a non-techie in our central office. Hamachi is useful here since it has a GUI and supports remote management. This system worked well for a while, but recently I have moved to a country with poor internet speeds. Hamachi will no longer connect 99% of the time - instead I get a "Probing..." message that doesn't resolve. It's certain to be a latency issue, as the same laptop will connect without problems when I cross the border and connect using a different ISP with better speeds. So I really need to replace Hamachi with some other VPN/protocol that handles latency better. The techie managing the repository is not comfortable installing and configuring Apache or IIS, so it looks like HTTP is out. I tried to convince my boss to go for a web hosting company, but he doesn't trust a 3rd party with our source. Any other recommended options / experiences out there for accessing our SVN repos that would be as simple as Hamachi for setup; but be more tolerant of network latency issues?

  • Sending files using Winsock - optimal send() data length?

    - by Meta
    I am using Winsock with non-blocking sockets to send a file to a client. The way I'm doing it right now is that I read a chunk of 8192 bytes from the file, and then loop until all of it successfully goes through send() (obviously handling WSAEWOULDBLOCK as it occurs). I then move on and read the next 8192 bytes, and so on... Although I can use any other number than 8192 when I test the transfer on my local machine, once I try it over a network, it seems like 8191 is the largest number I can use. When I try to use any number higher than 8191 (starting with 8192), the file transfer becomes extremely slow (about 5 times slower). Is there any reason why 8191 is so special? I've done some more testing and it turns out that using 8000 is slightly faster (by 0.5%). If you understand why 8191 is so special, can you tell me if there is a number better than the others (better than 8000)? I have a feeling that it has something to do with the fact that the default send buffer allocated to the socket by Winsock is 8KB, but I don't understand why. It might also have something to do with the Nagle algorithm, but again, I'm not sure how. Note that I have not modified the SO_SNDBUF option nor the TCP_NODELAY option. Or am I doing this all wrong? What's the best way of sending a file over a non-blocking socket?
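
    Rather than hard-coding a magic number, one option is to ask the socket how big its send buffer actually is and size the chunks from that; a minimal sketch with error handling kept deliberately thin:

        #include <winsock2.h>
        #pragma comment(lib, "ws2_32.lib")

        // Sketch only: query SO_SNDBUF and use a chunk size that fits inside it,
        // instead of hard-coding 8192/8191.
        int preferred_chunk_size(SOCKET s)
        {
            int sndbuf = 0;
            int len = sizeof(sndbuf);
            if (getsockopt(s, SOL_SOCKET, SO_SNDBUF,
                           reinterpret_cast<char*>(&sndbuf), &len) != 0)
                return 8000;                      // fall back to the value that tested fastest
            return sndbuf > 0 ? sndbuf : 8000;
        }

        // Optionally enlarge the buffer before streaming a large file:
        void enlarge_send_buffer(SOCKET s, int bytes /* e.g. 64 * 1024 */)
        {
            setsockopt(s, SOL_SOCKET, SO_SNDBUF,
                       reinterpret_cast<const char*>(&bytes), sizeof(bytes));
        }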

  • Debugging Post Request with Chrome Dev Tools

    - by benek
    I am trying to use Chrome Dev Tools for debugging the following Angular post request:

        $http.post("http://picjboss.puma.lan:8880/fluxpousse/api/flow/createOrUpdateHeader", flowHeader)

    After running the statement with right-click / evaluate, I can see the post in the network panel with a pending state. How can I get the result or "commit" the request and easily leave this "pending" state from the dev console? I am not yet very familiar with JS callbacks; some code is expected. Thanks.

    EDIT: I have tried to run from the console:

        $scope.$apply(function(){$http.post("http://picjboss.puma.lan:8880/fluxpousse/api/flow/createOrUpdateHeader", flowHeader).success(function(data){console.log("error "+data)}).error(function(data){console.log("error "+data)})})

    It returns: undefined

    EDIT: The post I am trying to solve generates an HTTP 400. Here is the result:

        Request URL: http://picjboss.puma.lan:8880/fluxpousse/api/flow/createOrUpdateHeader
        Request Method: POST
        Status Code: 400 Mauvaise Requête

        Request Headers:
        Accept: application/json, text/plain, */*
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4
        Connection: keep-alive
        Content-Length: 5354
        Content-Type: application/json;charset=UTF-8
        Cookie: JSESSIONID=285AF523EA18C0D7F9D581CDB2286C56
        Host: picjboss.puma.lan:8880
        Origin: http://picjboss.puma.lan:8880
        Referer: http://picjboss.puma.lan:8880/fluxpousse/
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36
        X-Requested-With: XMLHttpRequest

        Request Payload:
        {refHeader: IDSFP, idEntrepot: 619, codeEntreprise: null, codeBanniere: null, codeArticle: 7, ...}
        cessionPrice: 78
        codeArticle: "7"
        codeBanniere: null
        codeDateAppro: null
        codeDateDelivery: null
        codeDatePrepa: null
        codeEntreprise: null
        codeFournisseur: null
        codeUtilisateur: null
        codeUtilisateurLastUpdate: null
        createDate: null
        dateAppro: null
        dateDelivery: null
        datePrepa: null
        hasAssortControl: null
        hasCadenceForce: null
        idEntrepot: 619
        isFreeCost: null
        labelArticle: "Mayonnaise de DIJON"
        labelFournisseur: null
        listDetail: [,...]
        pcbArticle: 12
        pvc: 78
        qte: 78
        refCommande: "ref"
        refHeader: "IDSFP"
        state: "CREATED"
        stockArticle: 1200
        updateDate: null

        Response Headers:
        Connection: close
        Content-Length: 996
        Content-Type: text/html;charset=utf-8
        Date: Fri, 08 Nov 2013 15:19:30 GMT
        Server: Apache-Coyote/1.1
        X-Powered-By: Servlet 2.5; JBoss-5.0/JBossWeb-2.1

  • How to use IO.popen to write-to and read-from a child process?

    - by mackenir
    I am running net share from a ruby script to delete a windows network share. If files on the share are in use, net share will ask the user if they want to go ahead with the deletion, so my script needs to inspect the output from the command, and write out Y if it detects that net share is asking it for input. In order to be able to write out to the process I open it with access flags "r+". When attempting to write to the process with IO#puts, I get an error:

        Errno::EPIPE: Broken pipe

    What am I doing wrong here? (The error occurs on the line net_share.puts "Y") (The question text written out by net share is not followed by a newline, so I am using IO#readpartial to read in the output.)

        def delete_network_share share_path
          command = "net share #{share_path} /DELETE"
          net_share = IO.popen(command, "r+")
          text = ""
          while true
            begin
              line = net_share.readpartial 1000 #read all the input that's available
            rescue EOFError
              break
            end
            text += line
            if text.scan("Do you want to continue this operation? (Y/N)").size > 0
              net_share.puts "Y"
              net_share.flush #probably not needed?
            end
          end
          net_share.close
        end
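
    One workaround to try, offered as an untested assumption about net share's prompting behaviour rather than a confirmed fix: let cmd.exe feed the confirmation instead of writing to the child's stdin, which sidesteps the broken pipe entirely:

        # Sketch only: have the shell pipe the "Y" into net share and just collect
        # the combined output for inspection afterwards.
        def delete_network_share(share_path)
          IO.popen("cmd /c echo Y | net share #{share_path} /DELETE 2>&1") { |io| io.read }
        end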

  • Reporting Services "cannot connect to the report server database"

    - by Dano
    We have Reporting Services running, and twice in the past 6 months it has been down for 1-3 days, and suddenly it will start working again. The errors range from not being able to view the tree root in a browser, down to being able to insert parameters on a report, but crashing before the report can generate. Looking at the logs, there is 1 error and 1 warning which seem to correspond somewhat. ERROR:Event Type: Error Event Source: Report Server (SQL2K5) Event Category: Management Event ID: 107 Date: 2/13/2009 Time: 11:17:19 AM User: N/A Computer: ******** Description: Report Server (SQL2K5) cannot connect to the report server database. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. WARNING: always comes before the previous error Event code: 3005 Event message: An unhandled exception has occurred. Event time: 2/13/2009 11:06:48 AM Event time (UTC): 2/13/2009 5:06:48 PM Event ID: 2efdff9e05b14f4fb8dda5ebf16d6772 Event sequence: 550 Event occurrence: 5 Event detail code: 0 Process information: Process ID: 5368 Process name: w3wp.exe Account name: NT AUTHORITY\NETWORK SERVICE Exception information: Exception type: ReportServerException Exception message: For more information about this error navigate to the report server on the local server machine, or enable remote errors. During the downtime we tried restarting everything from the server RS runs on, to the database it calls to fill reports with no success. When I came in monday morning it was working again. Anyone out there have any ideas on what could be causing these issues? Edit Tried both suggestions below several months ago to no avail. This issue hasn't arisen since, maybe something out of my control has changed....

  • Best way to catch a WCF exception in Silverlight?

    - by wahrhaft
    I have a Silverlight 2 application that is consuming a WCF service. As such, it uses asynchronous callbacks for all the calls to the methods of the service. If the service is not running, or it crashes, or the network goes down, etc before or during one of these calls, an exception is generated as you would expect. The problem is, I don't know how to catch this exception. Because it is an asynchronous call, I can't wrap my begin call with a try/catch block and have it pick up an exception that happens after the program has moved on from that point. Because the service proxy is automatically generated, I can't put a try/catch block on each and every generated function that calls EndInvoke (where the exception actually shows up). These generated functions are also surrounded by External Code in the call stack, so there's nowhere else in the stack to put a try/catch either. I can't put the try/catch in my callback functions, because the exception occurs before they would get called. There is an Application_UnhandledException function in my App.xaml.cs, which captures all unhandled exceptions. I could use this, but it seems like a messy way to do it. I'd rather reserve this function for the truly unexpected errors (aka bugs) and not end up with code in this function for every circumstance I'd like to deal with in a specific way. Am I missing an obvious solution? Or am I stuck using Application_UnhandledException? [Edit] As mentioned below, the Error property is exactly what I was looking for. What is throwing me for a loop is that the fact that the exception is thrown and appears to be uncaught, yet execution is able to continue. It triggers the Application_UnhandledException event and causes VS2008 to break execution, but continuing in the debugger allows execution to continue. It's not really a problem, it just seems odd.
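
    For reference, the generated event-based proxy surfaces transport and service failures on the completed event's Error property, so the check can live right next to the result handling; the proxy, method and control names below are placeholders:

        // Sketch only: MyServiceClient / GetItems are stand-ins for the generated proxy.
        private void LoadData()
        {
            var client = new MyServiceClient();
            client.GetItemsCompleted += (sender, e) =>
            {
                if (e.Error != null)
                {
                    // Service not running, network down, or a server fault: it all
                    // lands here instead of throwing at the GetItemsAsync call site.
                    MessageBox.Show("Call failed: " + e.Error.Message);
                    return;
                }
                // e.Result is only safe to touch when Error is null.
                itemsList.ItemsSource = e.Result;   // itemsList is a placeholder control
            };
            client.GetItemsAsync();
        }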

  • How can I access a parent DOM from an iframe on a different domain?

    - by Dexter
    I have a website and my domain is registered through Network Solutions. I'm using their Web Forwarding feature, which allows me to "mask" my domain so that when a user visits http://lucasmccoy.com they are actually seeing http://lucasmccoy.comlu.com/ through an HTML frame. The advantage of this is that the address bar still shows http://lucasmccoy.com/. The disadvantage is that I cannot directly edit the HTML page in which the frame is owned. For example, I cannot change the page title or favicon. I have tried doing it like so:

        $(function() {
            parent.document.title = 'Lucas McCoy';
        });

    But of course this gives me a JavaScript error:

        Unsafe JavaScript attempt to access frame with URL http://lucasmccoy.com/ from frame with URL http://lucasmccoy.comlu.com/. Domains, protocols and ports must match.

    I looked at this question attempting to do the same thing, except the OP has access to the other page's HTML whereas I do not. Is there any way in JavaScript/jQuery to make a cross-domain request to the DOM when you don't have access to that domain? Or is this something browsers just will not let happen for security reasons?

  • Proper way to scan a range of IP addresses

    - by Josh G
    Given a range of IP addresses entered by a user (through various means), I want to identify which of these machines have software running that I can talk to. Here's the basic process:

      1. Ping these addresses to find available machines
      2. Connect to a known socket on the available machines
      3. Send a message to the successfully established sockets
      4. Compare the response to the expected response

    Steps 2-4 are straightforward for me. What is the best way to implement the first step in .NET? I'm looking at the System.Net.NetworkInformation.Ping class. Should I ping multiple addresses simultaneously to speed up the process? If I ping one address at a time with a long timeout it could take forever, but with a small timeout I may miss some machines that are available. Sometimes pings appear to fail even when I know that the address points to an active machine. Do I need to ping twice in the event of the request getting discarded? To top it all off, when I scan large collections of addresses with the network cable unplugged, Ping throws a NullReferenceException in FreeUnmanagedResources(). !? Any pointers on the best approach to scanning a range of IPs like this?
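
    A sketch of step 1 that pings the whole range concurrently with a short timeout and one retry, so a single slow host does not serialize the scan and one dropped echo does not hide a live machine. It assumes .NET 4.5+ for SendPingAsync; on earlier frameworks Ping.SendAsync with the PingCompleted event gives the same effect:

        using System.Collections.Generic;
        using System.Linq;
        using System.Net;
        using System.Net.NetworkInformation;
        using System.Threading.Tasks;

        static class Scanner
        {
            public static async Task<List<IPAddress>> FindAliveAsync(IEnumerable<IPAddress> range)
            {
                var tasks = range.Select(async addr =>
                {
                    using (var ping = new Ping())
                    {
                        for (int attempt = 0; attempt < 2; attempt++)   // retry once
                        {
                            try
                            {
                                var reply = await ping.SendPingAsync(addr, 750);  // 750 ms timeout
                                if (reply.Status == IPStatus.Success)
                                    return addr;
                            }
                            catch (PingException)
                            {
                                // Treat resolution/ICMP errors as "not alive" and move on.
                            }
                        }
                        return null;
                    }
                });

                var results = await Task.WhenAll(tasks);
                return results.Where(a => a != null).ToList();
            }
        }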

  • Opening URL in New Tab doesn't work in existing, programmatically-opened New Window (Firefox)

    - by seth
    I am building a web app, for myself, to control some servers on my home network, and discovered what I think is very odd behavior in Firefox. If you open a pop-up, via JavaScript, in Firefox, is it then impossible to open a new tab, via JavaScript, in that pop-up? If not impossible, how do you do it? Given a clean, default Firefox 3.6.3 installation, if I open a page in Firefox and then call

        var my_window = window.open('http://www.google.com','_blank','top=10');

    a brand new "pop-up" window opens. However, if instead I call

        var my_window = window.open('http://www.google.com');

    I get a new tab. HOWEVER... If I call the first version

        var my_window = window.open('http://www.google.com','_blank','top=10');

    and then in the new "pop-up" that opens, I call

        var my_window = window.open('http://www.google.com');

    it opens a new tab in the original window, not a new tab in the pop-up. This seems very odd, and not intuitive at all. Why would the call in the pop-up open a tab in the "parent" window?

  • What technology should I use to write my game?

    - by Alon
    I have a great idea for a 3D network game, and I've concluded that it is possible to write it in Java as an applet which will live in the web browser, just like full software written in C++, and it will look and feel the same. The main advantage of Java over C++ is that with Java you can play without downloading any software. I have already thought about the download of the graphics, sound, etc., but I found a solution for it. RuneScape just proves that it is possible. So my first question is: should my game live in a web browser or on the operating system? I think that in a web browser it is much more portable, although you need to install Java and stuff. But the fact is that most MMO games are currently not on the web. If you suggest standalone software, please also suggest a language - C++, or something more productive like Python or C#? So after choosing a language, I need a graphics solution. Should I write directly against OpenGL/DirectX or use a game engine? What game engine should I use? Ogre? jMonkeyEngine? What's your opinion? Thank you! P.S: Please don't use answers like "Use what you know".

  • How to control virtual memory management in linux?

    - by chmike
    I'm writing a program that uses an mmap file to hold a huge buffer organized as an array of 64 MB blocks. The blocks are used to aggregate data received from different hosts through the network. As a consequence the total data size written in each block is not known in advance. Most of the time it is only 2MB but in some cases it can be up to 20MB or more. The data doesn't stay long in the buffer. 90% is deleted after less than a second and the rest is transmitted to another host. I would like to know if there is a way to tell the virtual memory manager that ram pages are not dirty anymore when data is deleted. Should I use mmap and munmap when a block is used and released to control the virtual memory ? What would be the overhead of doing this ? Also, some colleagues expressed concerns about the performance impact of allocating such a big mmap space. I expect it to behave like a swap file so that only dirty pages are to be considered.
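
    For illustration, one way to tell the kernel a block's pages are no longer needed is madvise. A minimal sketch follows; note that MADV_DONTNEED behaves differently for private anonymous mappings and for shared file-backed ones (where MADV_REMOVE or fallocate() may be what you actually want), so treat this as a starting point and check madvise(2):

        #include <stdio.h>
        #include <sys/mman.h>

        #define BLOCK_SIZE (64UL * 1024 * 1024)

        /* Tell the kernel this block's contents are no longer needed.  For a private
         * anonymous mapping the dirty pages are simply dropped (they read back as
         * zeroes); for a shared file-backed mapping the semantics differ. */
        static int release_block(void *block_start)
        {
            if (madvise(block_start, BLOCK_SIZE, MADV_DONTNEED) != 0) {
                perror("madvise");
                return -1;
            }
            return 0;
        }

        int main(void)
        {
            /* Illustrative only: one 64 MB anonymous block. */
            void *block = mmap(NULL, BLOCK_SIZE, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (block == MAP_FAILED) {
                perror("mmap");
                return 1;
            }

            /* ... fill the block, forward most of it, then discard it ... */
            release_block(block);

            munmap(block, BLOCK_SIZE);
            return 0;
        }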

  • CORBA on MacOS X (Cocoa)

    - by user8472
    I am currently looking into different ways to support distributed model objects (i.e., a computational model that runs on several different computers) in a project that initially focuses on MacOS X (using Cocoa). As far as I know there is the possibility to use the class cluster around NSProxy. But there also seem to be implementations of CORBA around with Objective-C support. At a later time there may be the need to also support/include Windows machines. In that case I would need to use something like Gnustep on the Windows side (which may be an option, if it works well) or come up with a combination of both technologies. Or write something manually (which is, of course, the least desirable option). My questions are: If you have experience with both technologies (Cocoa native infrastructure vs. CORBA) can you point out some key features/issues of either approach? Is it possible to use Gnustep with Cocoa in the way explained above? Is it possible (and reasonably feasible, i.e. simpler than writing a network layer manually) to communicate among all MacOS clients using Cocoa's technology and with Windows clients through CORBA?

  • Double hashing passwords - client & server

    - by J. Stoever
    Hey, first, let me say I'm not asking about things like md5(md5(..., there are already topics about it. My question is this: we allow our clients to store their passwords locally. Naturally, we don't want them stored in plain text, so we HMAC them locally before storing and/or sending. Now, this is fine, but if this is all we did, then the server would have the stored HMAC, and since the client only needs to send the HMAC, not the plain text password, an attacker could use the stored hashes from the server to access anyone's account (in the catastrophic scenario where someone would get such access to the database, of course). So, our idea was to encode the password on the client once via HMAC, send it to the server, and there encode it a second time via HMAC and match it against the stored, twice-HMAC'ed password. This would ensure that:

      - The client can store the password locally without having to store it as plain text
      - The client can send the password without having to worry (too much) about other network parties
      - The server can store the password without having to worry about someone stealing it from the server and using it to log in

    Naturally, all the other things (strong passwords, double salt, etc.) apply as well, but aren't really relevant to the question. The actual question is: does this sound like a solid security design? Did we overlook any flaws with doing things this way? Is there maybe a security pattern for something like this?
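
    To make the scheme concrete, a small sketch of the two HMAC steps; key handling, salting and the comparison are deliberately simplified, and every name here is illustrative rather than part of the described system:

        using System.Linq;
        using System.Security.Cryptography;
        using System.Text;

        // The client stores/sends only H1 = HMAC(clientKey, password); the server
        // stores only H2 = HMAC(serverKey, H1), so a stolen server table cannot be
        // replayed directly as a login credential.
        static class PasswordScheme
        {
            public static byte[] ClientHash(string password, byte[] clientKey)
            {
                using (var h = new HMACSHA256(clientKey))
                    return h.ComputeHash(Encoding.UTF8.GetBytes(password));
            }

            public static byte[] ServerHash(byte[] clientHash, byte[] serverKey)
            {
                using (var h = new HMACSHA256(serverKey))
                    return h.ComputeHash(clientHash);
            }

            public static bool Verify(byte[] presentedClientHash, byte[] storedServerHash, byte[] serverKey)
            {
                byte[] candidate = ServerHash(presentedClientHash, serverKey);
                // A constant-time comparison would be preferable in production code.
                return candidate.SequenceEqual(storedServerHash);
            }
        }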
