Search Results

Search found 19533 results on 782 pages for 'machine specifications'.

Page 716/782 | < Previous Page | 712 713 714 715 716 717 718 719 720 721 722 723  | Next Page >

  • HTML relative links on various domains

    - by Adam Kiss
    I have a quick question: when you code or develop themes, how do you link to the various files in your HTML/CSS? For example, at our firm we mostly use <base target="http://whatever"> in our main template, then plain <img src="./images/file.png"> in our HTML, "/category/page" for links, and something similar in our CSS. However, when testing on different machines we use an IP address rather than localhost for the coder's main dev station, so all the base links break (on our network, localhost resolves to the viewing machine, not the coder's). The same thing happens when updating pages: on the dev server we have to edit the base target so that browsing the site won't take us to the live site. That part is actually simple PHP (if ... echo, else echo something else), but it still doesn't solve the broader code-and-test problem. So, my question is: how do YOU solve it? How do you use relative links that don't care which domain the page is on and don't care about URL rewriting? (Because ../images/ resolves differently for / than for /something/somethingElse/page.)
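
    One conventional approach, sketched here rather than prescribed: use root-relative URLs, which resolve identically from any page depth on any host, and note that the <base> element's URL-resolution attribute is href (target only sets the default browsing window), so a per-environment base can be emitted by the template. The host below is hypothetical:

        <base href="http://192.168.0.42/">   <!-- emitted per environment by the template -->
        <img src="/images/file.png">         <!-- root-relative: same from / and /a/b/page -->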

    Read the article

  • How to design authentication in a thick client to be fail-safe?

    - by Jay
    Here's a use case: I have a desktop application (built using Eclipse RCP) which, on start, pops open a dialog box with 'UserName' and 'Password' fields. Once the end user inputs his username and password, a server is contacted (a Spring remote servlet, with the client side being a Spring HTTP client, similar to the approaches here), and authentication is performed on the server side. A few questions about this scenario: If this authentication service were to go down, what would be the best way to handle further proceedings? Authentication is something I cannot do away with. Would running the desktop client in a "limited" mode be a good idea? For instance, important features/menus/views would be disabled, while the rest of the application stays accessible. Should I have a backup authentication service running on a different machine? What are the general best practices in this scenario? I remember reading about Google Gears and how it would let you edit and do things offline - should something like that be designed in? Please let me know your design/architectural comments/suggestions. I appreciate your help.

    Read the article

  • Multiple configurations in Qt

    - by user360607
    Hi all! I'm new to Qt Creator and I have several questions about multiple build configurations. A side note: I have Qt Creator 1.3.1 installed on my Linux machine.

    I need two configurations in my Qt Creator project. The catch is that these aren't simply debug and release, but are based on the target architecture: x86 or x64. I came across http://stackoverflow.com/questions/2259192/building-multiple-targets-in-qt-qmake and based on that I tried something like:

        Conf_x86 {
            TARGET = MyApp_x86
        }
        Conf_x64 {
            TARGET = MyApp_x64
        }

    This way, however, I don't seem to be able to use the Qt Creator IDE to build each of these separately (the Build All, Rebuild All, etc. options in the IDE menu). Is there a way to achieve this - maybe even to show Conf_x86 and Conf_x64 as new build configurations in Qt Creator?

    One other thing: my Qt is 64-bit, so by default the target built from Qt Creator will also be 64-bit. I noticed that the effective qmake call in the build step includes the option '-spec linux-g++-64'. I also noticed that if I add '-spec linux-g++-32' under 'Additional arguments', it overrides '-spec linux-g++-64' and the resulting target is 32-bit. How can I achieve the same thing by editing only the contents of the .pro file? I saw that all these changes are initially saved in the .pro.user file, but that doesn't suit me at all. I need to be able to make these configurations from the .pro file if possible. Any help will be appreciated. Thanks in advance!
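
    One way to surface these as real build configurations (a sketch; the scope names are assumptions): keep plain CONFIG scopes in the .pro file, then create two build configurations in Qt Creator that pass CONFIG+=conf_x86 or CONFIG+=conf_x64 as additional qmake arguments. Whether -m32 alone fully replaces the -spec switch depends on how Qt was built:

        conf_x86 {
            TARGET = MyApp_x86
            QMAKE_CXXFLAGS += -m32
            QMAKE_LFLAGS   += -m32
        }
        conf_x64 {
            TARGET = MyApp_x64
            QMAKE_CXXFLAGS += -m64
            QMAKE_LFLAGS   += -m64
        }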

    Read the article

  • ASP.NET 3.5 app - cannot load assemblies, "Strong name signature could not be verified", only when d

    - by hitsolutions
    I have developed an ASP.NET 3.5 application which consists of a web site, some assemblies I developed, and some third-party assemblies such as Telerik, Jayrock, etc. - all fairly standard third-party components. I created and built this app and tested it on a Windows 2008 evaluation running in a VM; all fine. Imagine my frustration when, after installing on the client's production Windows 2008 server, the app could not run, and the error message was: "Strong name signature could not be verified. The assembly may have been tampered with, or it was delay signed ..." This happened for every assembly in the app (I removed one and the error kept popping up for a different assembly). I attempted to install on another machine on the network and received the same error. I am fairly baffled and a little freaked, as I cannot figure this out and time is rapidly running out. I have inspected all the parts of the server I know about (.NET, IIS7) but all seems fine. What could cause this? It sounds like there is a stricter security policy on the production server - but where would I look, and for what? It must be a group policy. The only other item is that the machines are running Symantec antivirus. The IT head is on holiday so I can't quiz him, which is also frustrating - but as they say, time waits for no man!
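
    As a first diagnostic (not a fix), the Strong Name tool from the .NET SDK can verify each assembly directly on the production server, which at least narrows down whether the files are corrupt or a verification policy is interfering; the assembly name below is a placeholder:

        sn.exe -v MyAssembly.dll
        sn.exe -Vl

    The first command verifies one assembly's strong-name signature; the second lists any verification-skip entries registered on the machine.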

    Read the article

  • Why won't my Setup Project perform my custom registration process?

    - by Jordan S
    I am trying to write a Setup Project/installer for a class library driver that I wrote in C# using Visual Studio 2008. The driver project has a section of code that looks like this:

        [ComRegisterFunction]
        public static void RegisterASCOM(Type t)
        {
            Trace.WriteLine("Registration Started.");
            DoRegistration(true);
        }

    In the driver project's Properties - "Assembly Information" I have checked the box that says Make COM-Visible. I added a Setup Project to the solution in VS, added the output DLL from the driver project so that it installs on the target machine, and set the Register property of the DLL to "vsdraCOM". My understanding is that when the installer runs, it SHOULD execute the methods of the DLL that are marked with [ComRegisterFunction].

    Using Sysinternals DebugView I can tell when the above code snippet is hit by watching for the "Registration Started" text to show up in the window. When I build the solution, I can see the text show up, so I know the driver is registering properly. The problem is that when I run the installer, I don't think it is doing the registration bit: I see nothing show up in DebugView, and if I try to access my driver via another application I get an error saying it "Cannot create ActiveX object". Why does the registration not occur during the install process? The driver does register for COM, but it does NOT call my custom registration method. Does anyone have any suggestions of what I could be missing? Is there another way I can debug this? (I can provide more code if anyone wants to take a look!)
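
    One commonly cited explanation for this behavior: the vsdraCOM setting writes registry entries that were extracted at build time and never actually loads the assembly, so methods marked [ComRegisterFunction] are not invoked. A sketch of the usual workaround (class and method names here are illustrative) is an installer class added to the Setup Project's Custom Actions, which registers the assembly the same way regasm /codebase does:

        using System.Collections;
        using System.ComponentModel;
        using System.Configuration.Install;
        using System.Runtime.InteropServices;

        [RunInstaller(true)]
        public class ComRegistrarInstaller : Installer
        {
            public override void Install(IDictionary stateSaver)
            {
                base.Install(stateSaver);
                // Loads the assembly and invokes its [ComRegisterFunction] methods.
                new RegistrationServices().RegisterAssembly(
                    GetType().Assembly, AssemblyRegistrationFlags.SetCodeBase);
            }

            public override void Uninstall(IDictionary savedState)
            {
                new RegistrationServices().UnregisterAssembly(GetType().Assembly);
                base.Uninstall(savedState);
            }
        }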

    Read the article

  • em1: watchdog timeout -- resetting (VirtualBox)

    - by lightmanhk
    Hi everyone. I am trying to configure the network adapters of my FreeBSD guest in a VirtualBox machine. At first setup, everything worked. However, after a couple of reboots, my VM gives me an error message: "em1: Watchdog timeout -- resetting". I tried to Google it, and there is no solution that works for me.

    The following is the network adapter setup of my VM:

        Adapter 1: NAT
        Adapter 2: Host-only Adapter, vboxnet0

    In my VirtualBox settings, under Host-only networks:

        Adapter: IPv4 address: 192.168.56.10
        IPv4 network mask: 255.255.255.0

    In my FreeBSD VM's /etc/rc.conf:

        ifconfig_em0="DHCP"
        ifconfig_em1="inet 192.168.56.11 netmask 255.255.255.0"

    Moreover, a strange thing: if the timeout error message shows at ttyv0 and I log in at a different ttyv[1-9], the error message is not shown there - but at ttyv0 it still appears. And in terms of functionality, em1 does work. I have no clue about this problem. I'm hoping someone can give me a solution or suggestion - that would be great! Thanks a lot.

        FreeBSD version: 9.0-RELEASE (i386)
        VirtualBox version: 4.1.2 r73507

    Read the article

  • Cannot open a SQL2000 DTS package I imported into SQL2008

    - by RJ
    I am running into a problem trying to open a SQL 2000 DTS package that I imported into SQL 2008. I set up a new server with a fresh install of SQL 2008. The database I need to run is a SQL 2000 database. I moved the database over with no problem, but there are a few DTS packages that need to run in legacy mode on SQL 2008. I exported the DTS packages I need out of SQL 2000 and imported them successfully into SQL 2008. My SQL 2008 server is x64. I can see the DTS packages under Data Transformation Services in Legacy, but when I try to open a package I get this message: "SQL Server 2000 DTS Designer components are required to edit DTS packages. Install the special web download, "SQL Server 2000 DTS Designer components" to use this feature. (Microsoft.SqlServer.DtsObjectExplorerUI)" I downloaded the components and installed them and still get this error. I researched and found an article about this not working on x64, so I installed the SQL 2008 tools on an x86 machine and tried to open the package from there - and got the same error. I have spent days on this and need help. Has anyone run across this who can tell me what to do? If you have solved this problem, please help me out. Thanks.

    Read the article

  • Why is PLINQ slower than LINQ for this code?

    - by Rob Packwood
    First off, I am running this on a dual-core 2.66 GHz machine. I am not sure if I have the .AsParallel() call in the correct spot; I tried it directly on the range variable too, and that was still slower. I don't understand why...

    Here are my results:

        Process non-parallel 1000 took 146 milliseconds
        Process parallel 1000 took 156 milliseconds
        Process non-parallel 5000 took 5187 milliseconds
        Process parallel 5000 took 5300 milliseconds

    And the code:

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;
        using System.Linq;

        namespace DemoConsoleApp
        {
            internal class Program
            {
                private static void Main()
                {
                    ReportOnTimedProcess(
                        () => GetIntegerCombinations(),
                        "non-parallel 1000");
                    ReportOnTimedProcess(
                        () => GetIntegerCombinations(runAsParallel: true),
                        "parallel 1000");
                    ReportOnTimedProcess(
                        () => GetIntegerCombinations(5000),
                        "non-parallel 5000");
                    ReportOnTimedProcess(
                        () => GetIntegerCombinations(5000, true),
                        "parallel 5000");
                    Console.Read();
                }

                private static List<Tuple<int, int>> GetIntegerCombinations(
                    int iterationCount = 1000, bool runAsParallel = false)
                {
                    IEnumerable<int> range = Enumerable.Range(1, iterationCount);
                    IEnumerable<Tuple<int, int>> integerCombinations =
                        from x in range
                        from y in range
                        select new Tuple<int, int>(x, y);
                    return runAsParallel
                        ? integerCombinations.AsParallel().ToList()
                        : integerCombinations.ToList();
                }

                private static void ReportOnTimedProcess(
                    Action process, string processName)
                {
                    var stopwatch = new Stopwatch();
                    stopwatch.Start();
                    process();
                    stopwatch.Stop();
                    Console.WriteLine("Process {0} took {1} milliseconds",
                        processName, stopwatch.ElapsedMilliseconds);
                }
            }
        }
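
    One observation worth checking (a sketch of the alternative placement, not a guarantee of speedup): calling AsParallel() on the finished query only parallelizes the final buffering into the list, while the nested from clauses still execute sequentially on one thread. Parallelizing the source lets PLINQ partition the outer loop, though with work this cheap per element the partitioning overhead may still dominate on two cores:

        IEnumerable<Tuple<int, int>> integerCombinations =
            from x in range.AsParallel()
            from y in range
            select new Tuple<int, int>(x, y);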

    Read the article

  • Why are (almost) all the online games written in ActionScript (Flash), not Java?

    - by MasterPeter
    I absolutely love good defender games (e.g. GemCraft, Protector: Reclaiming the Throne), as they can be intellectually quite challenging; it's like playing chess, but a little less thinking and a bit more action. Sadly, there are not that many good ones out there, so I thought I would create one myself and share it with the rest of the world by making it available online. I have never worked with ActionScript, but when it comes to online games it is the main choice; I have tried to find a decent 2D game in the form of a Java applet, but to no avail. Why is this so? I could write the game most comfortably in Delphi for Win32, but then people would need to download the executable, which could deter some from downloading it, and it would only work on Windows. I am also familiar with Java, having worked with it for the last four years or so, although I don't have much experience with games programming. Should I be deterred by the fact that almost all online games are written in Flash, or should I go ahead and create my defender game as a Java applet? Or should I consider learning ActionScript and games development for the ActionScript Virtual Machine? (AS3 looks very much like Java... but still, it's an entirely new technology to me and I might never use it professionally.) Could you please just answer the question in the title: why Flash and not Java applets? Is it only 'politics'?

    Read the article

  • Win32 API support in different programming languages

    - by b-gen-jack-o-neill
    Hello. This should be a very simple question. There are many programming languages out there, compiled to machine code or to managed code. I first started with ASM back in high school. Assembler is very nice, since you know exactly what the CPU does. Next (as you can see from my other questions here) I decided to learn C and C++. I chose C because, from what I read, it is the language whose output is closest to assembler-written programs. But what I want to know is: can any other Windows programming language out there call the Win32 API? To be exact: just as C has its special headers and functions for Win32 API interaction, is this assumed to be an important part of a programming language? Or are there languages that have no support for calling the Win32 API and just use the console for I/O plus some functions for basic file I/O? Because for Windows programming with graphical output, it is essential to have access to the Win32 API. I know this question might seem silly, but please help me; I ask for study purposes. Thanks.
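
    As one data point for the question: most languages that target Windows expose some foreign-function interface rather than dedicated headers. A minimal C# sketch that calls the Win32 API through P/Invoke:

        using System;
        using System.Runtime.InteropServices;

        class Win32Demo
        {
            // Declaration mirrors the native MessageBoxW signature in user32.dll.
            [DllImport("user32.dll", CharSet = CharSet.Unicode)]
            static extern int MessageBox(IntPtr hWnd, string text, string caption, uint type);

            static void Main()
            {
                MessageBox(IntPtr.Zero, "Hello from the Win32 API", "P/Invoke demo", 0);
            }
        }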

    Read the article

  • Simple problem with mod_rewrite in the Fat Free Framework

    - by ian
    I am trying to set up and learn the Fat-Free Framework for PHP (http://fatfree.sourceforge.net/). It is fairly simple to set up, and I am running it on my machine using MAMP. I was able to get the 'hello world' example running just fine:

        require_once 'path/to/F3.php';
        F3::route('GET /','home');
        function home() {
            echo 'Hello, world!';
        }
        F3::run();

    But when I add the second part, which has two routes:

        require_once 'F3/F3.php';
        F3::route('GET /','home');
        function home() {
            echo 'Hello, world!';
        }
        F3::route('GET /about','about');
        function about() {
            echo 'About Us.';
        }
        F3::run();

    I get a 404 error when I try the second URL, /about. I am not sure why one of the mod_rewrite rules would be working and not the other. Below is my .htaccess file:

        # Enable rewrite engine and route requests to framework
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-l
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule .* index.php [L,QSA]

        # Disable ETags
        Header Unset ETag
        FileETag none

        # Default expires header if none specified (stay in browser cache for 7 days)
        <IfModule mod_expires.c>
            ExpiresActive On
            ExpiresDefault A604800
        </IfModule>
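
    One assumption worth checking in this setup: under MAMP the project often lives in a subfolder of the document root, in which case / happens to hit index.php directly while /about depends on the rewrite, so RewriteBase must name the subfolder (the folder name below is hypothetical):

        RewriteBase /myapp/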

    Read the article

  • WCF newbie - how to install and use an SSL certificate?

    - by Shaul
    This should be a snap for anyone who's done it before... I'm trying to set up a self-hosted WCF service using NetTcpBinding. I got a trial SSL certificate from Thawte and successfully installed it in my IIS store, and I think it is correctly set up in the service - at least it doesn't throw an exception at me! Now I'm trying to connect the client (this is still all on my dev machine), and it gives me an error:

        "The X.509 certificate CN=ssl.mydomain.com, OU=For Test Purposes Only. No assurances., OU=IT, O=My Company, L=My Town, S=None, C=IL chain building failed. The certificate that was used has a trust chain that cannot be verified. Replace the certificate or change the certificateValidationMode. A certificate chain processed, but terminated in a root certificate which is not trusted by the trust provider."

    Ooookeeeey... now what? Client code (I want to do this in code, not app.config):

        var baseAddress = "localhost";
        var factory = new DuplexChannelFactory<IMyWCFService>(new InstanceContext(SiteServer.Instance));
        factory.Endpoint.Address = new EndpointAddress("net.tcp://{0}:8000/".Fmt(baseAddress));
        var binding = new NetTcpBinding(SecurityMode.Message);
        binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;
        factory.Endpoint.Binding = binding;
        var u = factory.Credentials.UserName;
        u.UserName = userName;
        u.Password = password;
        return factory.CreateChannel();
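
    The error text itself suggests the two dev-box options (a sketch; loosening validation is for testing only - a trial certificate's root simply isn't trusted yet): either import Thawte's test root CA into the client's Trusted Root Certification Authorities store, or relax the validation mode in code on the client:

        factory.Credentials.ServiceCertificate.Authentication.CertificateValidationMode =
            System.ServiceModel.Security.X509CertificateValidationMode.PeerTrust;
        // PeerTrust trusts certificates placed in the client's Trusted People store;
        // X509CertificateValidationMode.None disables checking entirely (dev only).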

    Read the article

  • Rails upload file to ftp server

    - by Bob
    I'm on Rails 2.3.5 and Ruby 1.8.6 and trying to figure out how to let a user upload a file to an FTP server on a different machine than my Rails app. Also, my Rails app will be hosted on Heroku, which doesn't allow writing files to the local filesystem.

    index.html.erb:

        <% form_tag '/ftp/upload', :method => :post, :multipart => true do %>
          <label for="file">File to Upload</label>
          <%= file_field_tag "file" %>
          <%= submit_tag 'Upload' %>
        <% end %>

    ftp_controller.rb:

        require 'net/ftp'

        class FtpController < ApplicationController
          def upload
            file = params[:file]
            ftp = Net::FTP.new('remote-ftp-server')
            ftp.login(user = "***", passwd = "***")
            ftp.puttextfile(file.read, File.basename(file.original_filename))
            ftp.quit()
          end

          def index
          end
        end

    Currently I'm just trying to get the Rails app to work on my Windows laptop. With the above code I'm getting this error, followed by a dump of the file contents:

        Errno::ENOENT in FtpController#upload
        No such file or directory -....

    Does anyone know what's going on?

    Read the article

  • TCP Socket.Connect is generating false positives

    - by Mark
    I'm experiencing really weird behavior with the Socket.Connect method in C#. I am attempting a TCP Socket.Connect to a valid IP but a closed port, and the method continues as if I had successfully connected. When I packet-sniffed what was going on, I saw that the app was receiving RST packets from the remote machine. Yet from the tracing that is in place, it is clear that the Connect method is not throwing an exception. Any ideas what might be causing this? The code that is running is basically this:

        IPEndPoint iep = new IPEndPoint(System.Net.IPAddress.Parse(m_ipAddress), m_port);
        Socket tcpSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        tcpSocket.Connect(iep);

    To add to the mystery: when running this code in a standalone console application, the result is as expected - the Connect method throws an exception. However, when running it in our Windows service deployment, the Connect method does not throw.

    Edit, in response to Mystere Man's answer: How would the exception be swallowed? I have a Trace.WriteLine right above the .Connect method and a Trace.WriteLine right under it (not shown in the code sample for readability). I know that both traces are running. I also have a try/catch around the whole thing which also does a Trace.WriteLine, and I don't see that in the log files anywhere. I have also enabled the internal socket tracing as you suggested. I don't see any exceptions; I see what appear to be successful connections. I am trying to identify differences between the Windows service app and the diagnostic console app I made, but I am running out of ideas. Thanks.
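
    A small diagnostic sketch (it checks the socket's state rather than explaining the swallowed exception): immediately after Connect, a socket that selects as readable while reporting zero available bytes has been closed or reset by the peer, which would corroborate what the packet capture shows:

        tcpSocket.Connect(iep);
        bool looksReset = tcpSocket.Poll(1000 /* microseconds */, SelectMode.SelectRead)
                          && tcpSocket.Available == 0;
        Trace.WriteLine(looksReset ? "Peer refused/reset the connection"
                                   : "Connection appears established");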

    Read the article

  • More efficient R / Sweave / TeXShop workflow?

    - by user594795
    I've now got everything working properly on my Mac OS X 10.6 machine so that I can create decent-looking LaTeX documents with Sweave that combine snippets of R code, output, and LaTeX formatting. Unfortunately, I feel like my workflow is a bit clunky and inefficient:

    1. Using TextWrangler, I write LaTeX code and R code (each R chunk surrounded by <<= above and @ below) together in one .Rnw file.
    2. After saving changes, I call the .Rnw file from R using the Sweave command:

           Sweave(file="/Users/mymachine/Documents/Assign4.Rnw", syntax="SweaveSyntaxNoweb")

       In response, R outputs the message: You can now run LaTeX on 'Assign4.tex'
    3. So then I find the .tex file (Assign4.tex) in the R directory and copy it over to the folder (~/Documents/) where the .Rnw file is sitting, to keep everything in one place.
    4. Then I open the .tex file in TeXShop and compile it there into PDF format.

    It is only at this point that I get to see any changes I have made to the document and whether it 'looks nice'. Is there a way to compile everything with one button click? Specifically, it would be nice to call Sweave/R directly from TextWrangler or TeXShop. I suspect it might be possible to script this in Terminal, but I have no experience with Terminal. Please let me know if there are any other things I can do to streamline or improve my workflow.
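
    For what it's worth, the copy step can be avoided entirely by running Sweave from the document's own folder; a minimal Terminal route (assuming R and a TeX distribution are on the PATH) that an editor shortcut could wrap:

        cd ~/Documents && R CMD Sweave Assign4.Rnw && pdflatex Assign4.tex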

    Read the article

  • Can I change the identity of the logged-in user in ASP.NET?

    - by Rising Star
    I have an ASP.NET application I'm developing authentication for. I am using an existing cookie-based logon system to log users in to the system. The application runs as an anonymous account and then checks the cookie when the user wants to do something restricted. This is working fine. However, there is one caveat: I've been told that for each page that connects to our SQL server, the user must connect using an Active Directory account. Because the system I'm using is cookie-based, the user isn't logged in to Active Directory, so I use impersonation to connect to the server as a specific account. However, the powers that be here don't like impersonation; they say that it clutters up the code. I agree, but I've found no way around it. It seems that the only way a user can be logged in to an ASP.NET application is either by connecting with Internet Explorer from a machine where the user is logged in with their Active Directory account, or by typing an Active Directory username and password. Neither of these is workable in my application. It would be nice if, when a user logs in and receives the cookie (which actually comes from a separate logon application, by the way), some code could run that tells the application to perform all network operations as the user's Active Directory account, just as if they had typed an Active Directory username and password. It seems like this ought to be possible somehow, but the solution evades me. How can I make this work?
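
    One mechanism that matches this wish list (the APIs are real, but whether the token works here depends on the domain setup): on .NET 2.0 and later, constructing a WindowsIdentity from a user principal name performs a Kerberos S4U logon with no password, and the identity can be impersonated for a scoped block. The caveat is that such a token usually cannot authenticate to another machine, such as the SQL server, unless constrained delegation is configured for the web server's account:

        using System.Security.Principal;

        // userUpn (e.g. "jdoe@corp.example.com") would come from the validated cookie - an assumption.
        WindowsIdentity identity = new WindowsIdentity(userUpn);
        using (WindowsImpersonationContext ctx = identity.Impersonate())
        {
            // Network and database calls in this block run as the user.
        }   // Reverts to the application's account when disposed.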

    Read the article

  • Where are the network boundaries in the Java Connector Architecture (JCA)?

    - by Laird Nelson
    I am writing a JCA resource adapter. As I go, I'm also trying to fully understand the connection management portion of the JCA specification. As a thought experiment, pretend that the only client of this adapter will be a Swing Java application client located on a different machine. Also assume that the resource adapter will communicate with its "enterprise information system" (EIS) over the network as well.

    As I understand the JCA specification, the .rar file is deployed to the application server. The application server creates the .rar file's implementation of the ManagedConnectionFactory interface. It then asks it to produce a connection factory - the opaque object that is bound into JNDI for the user to use to obtain a connection to the resource. (In the case of JDBC, the connection factory is a javax.sql.DataSource.) It is a requirement that the connection factory retain a reference to the application-server-supplied ConnectionManager, which, in turn, is required to be Serializable. This makes sense: in order for the connection factory to be stored in JNDI, it must be serializable, and in order for it to keep a reference to the ConnectionManager, the ConnectionManager must also be serializable.

    So fine - this little object graph gets installed in the application client's JNDI tree. This is where I start to get queasy. Is the ConnectionManager - the piece supplied by the application server that is supposed to handle connection management, sharing, pooling, etc. - wholly present on the client at this point? One of its jobs is to create ManagedConnection instances, and a ManagedConnection is not required to be Serializable, and the user connection handles it vends are also not required to be Serializable. That suggests to me that the whole connection pooling machinery is shipped wholesale to the application client and stuffed into its JNDI tree. Does this all mean that JCA interactions from the client side bypass the server-side componentry of the application server? Where are the network boundaries in the JCA API?

    Read the article

  • Integrated Windows authentication in IIS causing ADO.NET failure

    - by TrueWill
    We have a .NET 3.5 web service running under IIS. It must use identity impersonate="true" and Integrated Windows authentication in order to authenticate to third-party software. In addition, it connects to a SQL Server database using ADO.NET and SQL Server authentication (specifying a fixed User ID and Password in the connection string). Everything worked fine until the database was moved to another SQL Server. Then the web service would throw the following exception:

        A network-related or instance-specific error occurred while establishing a connection
        to SQL Server. The server was not found or was not accessible. Verify that the instance
        name is correct and that SQL Server is configured to allow remote connections.
        (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)

    This error only occurs if identity impersonate is true in the Web.config. Again, the connection string hasn't changed, and it specifies the user. I have tested the connection string and it works, both under the impersonated account and under the service account (and from both the remote machine and the server). What needs to be changed to get this to work with impersonation?
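
    A detail that may explain the impersonation dependency (an assumption, though it fits the symptoms): a named-pipes connection authenticates at the Windows/SMB layer even when the connection string specifies SQL authentication, and an impersonated token typically has no network credentials for that hop. Forcing the TCP provider in the connection string sidesteps the pipe; the server and database names below are placeholders:

        <connectionStrings>
          <add name="Main"
               connectionString="Data Source=tcp:newsqlserver,1433;Initial Catalog=MyDb;User ID=appuser;Password=..." />
        </connectionStrings>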

    Read the article

  • SharePoint fails to load a C++ DLL on Windows 2008

    - by Nathan
    I have a SharePoint DLL that does some licensing things, and as part of the code it uses an external C++ DLL to get the serial number of the hard disk. When I run this application on Windows Server 2003 it works fine, but on 2008 the whole site crashes on load and resets continually. This is not 2008 R2, and it is the same in 64-bit or 32-bit. If I put a Debugger.Break before the DLL execution, I see the code get to the point of the break, then never come back into the DLL again. I do get some debug assertion warnings from within the function - again only on 2008 - but I'm not sure this is related.

    I created a console app that runs the C# DLL, which in turn loads the C++ DLL, and this works perfectly on 2008 (although it does show the assertion errors, but I have suppressed those now). The assertion errors are not in my code but within ICtypes.c, and not something I can debug. If I put a breakpoint in the DLL it is never hit, and the debugger says "step in: Stepping over non user code" if I try to step into the DLL using VS. I have the source code for this DLL, so that is not a problem.

    I have tried wrapping the code used to call the DLL in SPSecurity.RunWithElevatedPrivileges(delegate() ...), but this also does not help. If I delete the DLL from the directory I get an error about a missing DLL; if I put it back, no error or warning, just a complete failure. If I replace this code with a hardcoded string, the whole application works fine. Any advice would be much appreciated. I can't understand why it works as a console app yet not when run by SharePoint, with the same user account, on the same machine...

    This is the code used to call the DLL:

        [DllImport("idDll.dll", EntryPoint = "GetMachineId", SetLastError = true)]
        extern static string GetComponentId([MarshalAs(UnmanagedType.LPStr)]String s);

        public static string GetComponentId()
        {
            Debugger.Break();
            if (_machine == string.Empty)
            {
                string temp = "";
                id = ComponentId.GetComponentId(temp);
            }
            return id;
        }
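
    One red flag in that declaration (an observation, not a confirmed diagnosis): when a P/Invoke signature returns string, the marshaler copies the returned char* and then frees the native pointer with CoTaskMemFree. If the C++ side returns a static or CRT-allocated buffer, that free corrupts the heap, and whether the corruption crashes the host can differ between a console app and an IIS worker process. A more defensive sketch, assuming the native function returns a pointer to a buffer it owns:

        [DllImport("idDll.dll", EntryPoint = "GetMachineId", SetLastError = true)]
        static extern IntPtr GetComponentIdNative([MarshalAs(UnmanagedType.LPStr)] string s);

        public static string GetComponentId()
        {
            // PtrToStringAnsi copies the characters without freeing the native buffer.
            return Marshal.PtrToStringAnsi(GetComponentIdNative(""));
        }

    Also worth ruling out: a 32-bit idDll.dll cannot load into a 64-bit IIS worker process, while a 32-bit console test app would load it without complaint.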

    Read the article

  • Is it safe to delete rotated MySQL binary logs?

    - by Milan Babuškov
    I have a MySQL server with binary logging active. Once a day the log file is "rotated": MySQL stops writing to it and creates a new log file. For example, I currently have these files in /var/lib/mysql:

        -rw-rw---- 1 mysql mysql 10485760 Jun  7 09:26 ibdata1
        -rw-rw---- 1 mysql mysql  5242880 Jun  7 09:26 ib_logfile0
        -rw-rw---- 1 mysql mysql  5242880 Jun  2 15:20 ib_logfile1
        -rw-rw---- 1 mysql mysql  1916844 Jun  6 09:20 mybinlog.000004
        -rw-rw---- 1 mysql mysql 61112500 Jun  7 09:26 mybinlog.000005
        -rw-rw---- 1 mysql mysql 15609789 Jun  7 13:57 mybinlog.000006
        -rw-rw---- 1 mysql mysql       54 Jun  7 09:26 mybinlog.index

    and mybinlog.000006 is growing. Can I simply take mybinlog.000004 and mybinlog.000005, zip them up, and transfer them to another server, or do I need to do something else first? What info is stored in mybinlog.index - only the names of the binary logs?

    UPDATE: I understand I can delete the logs with PURGE BINARY LOGS, which updates the mybinlog.index file. However, I need to transfer the logs to another computer before deleting them (I test whether a backup is valid on another machine). To reduce the transfer size, I want to bzip2 the files. What will PURGE BINARY LOGS do if the log files are no longer there?
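
    For reference, a minimal sketch of the purge step (standard MySQL syntax; the file name comes from the listing above). Since how PURGE reacts to already-missing files has varied across MySQL versions, the safer order is: copy and bzip2 the closed logs first, then let PURGE do the deleting, rather than removing files out from under the server:

        PURGE BINARY LOGS TO 'mybinlog.000006';
        -- removes mybinlog.000004 and .000005 and rewrites mybinlog.index;
        -- the active log, .000006, is kept.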

    Read the article

  • Controller actions appear to be synchronous even though they are separate requests?

    - by Oded
    I am under the impression that the code below should work asynchronously. However, looking at Firebug, I see the requests fired asynchronously but the results coming back synchronously.

    Controller:

        [HandleError]
        public class HomeController : Controller
        {
            public ActionResult Status()
            {
                return Content(Session["status"].ToString());
            }

            public ActionResult CreateSite()
            {
                Session["status"] += "Starting new site creation";
                Thread.Sleep(20000); // Simulate long running task
                Session["status"] += "<br />New site creation complete";
                return Content(string.Empty);
            }
        }

    JavaScript/jQuery:

        $(document).ready(function () {
            $.ajax({
                url: '/home/CreateSite',
                async: true,
                success: function () { mynamespace.done = true; }
            });
            setTimeout(mynamespace.getStatus, 2000);
        });

        var mynamespace = {
            counter: 0,
            done: false,
            getStatus: function () {
                $('#console').append('.');
                if (mynamespace.counter == 4) {
                    mynamespace.counter = 0;
                    $.ajax({
                        url: '/home/Status',
                        success: function (data) { $('#console').html(data); }
                    });
                }
                if (!mynamespace.done) {
                    mynamespace.counter++;
                    setTimeout(mynamespace.getStatus, 500);
                }
            }
        }

    Additional information: IIS 7.0, Windows 2008 R2 Server, running in a VMware virtual machine.

    Can anyone explain this? Shouldn't the Status action return practically immediately instead of waiting for CreateSite to finish?

    Edit: How can I get the long-running process to kick off and still get status updates?
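
    A plausible cause, judging only from the code shown: ASP.NET serializes concurrent requests from the same session whenever session state is writable, so Status waits until CreateSite releases the session lock - which matches the observed behavior exactly. A sketch of one mitigation (the attribute exists from ASP.NET MVC 3 onward; splitting the polling action into its own controller is an assumption):

        using System.Web.Mvc;
        using System.Web.SessionState;

        [SessionState(SessionStateBehavior.ReadOnly)]   // read-only: no session lock taken
        public class StatusController : Controller
        {
            public ActionResult Status()
            {
                object status = Session["status"];
                return Content(status == null ? string.Empty : status.ToString());
            }
        }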

    Read the article

  • RPC command to initiate a software install

    - by ericmayo
    I was recently working with a product from Symantec called Norton EndPoint Protection. It consists of a server console application and a deployment application, and I would like to incorporate their deployment method into a future version of one of my products. The deployment application allows you to select computer workstations running Win2K, WinXP, or Win7. The selection of workstations is provided from either AD (Active Directory) or an NT domain (WINS/DNS NetBIOS lookup). From the list, one can click and choose which workstations receive the EndPoint software, which is Symantec's virus and spyware protection suite. After selecting which workstations should receive the package, the software copies the setup.exe program to each workstation (presumably over the administrative share \\pcname\c$) and then commands the workstation to execute setup.exe, so that the workstation installs the software.

    I really like how their product works, but I am not sure what they are doing to accomplish all the steps. I have not done any deep investigation, such as sniffing the network, and wanted to check here to see if anyone is familiar with what I'm talking about, knows how it is accomplished, or has ideas on how it could be done. My thinking is that they use the admin share to copy the software to the selected workstations and then issue an RPC call commanding the workstation to run the install. What's interesting is that the workstations do this without any of the logged-in users knowing what's going on until the very end, when a reboot is necessary. At that point, the user gets a pop-up asking to reboot now or later; my hunch is that setup.exe displays this message.

    To the point: I'm looking for the mechanism by which one Windows-based machine can tell another to do some action or run some program. My programming language is C/C++. Any thoughts/suggestions appreciated.
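
    For experimentation, the same copy-then-execute pattern can be reproduced from the command line with WMI, one of the standard remote-execution channels (machine, domain, and account names are placeholders; the account must be an administrator on the target):

        rem copy the installer over the administrative share
        copy setup.exe \\workstation01\c$\temp\setup.exe

        rem ask WMI on the target to launch it (runs non-interactively)
        wmic /node:workstation01 /user:DOMAIN\admin process call create "c:\temp\setup.exe /quiet"

    From C/C++ the equivalent is calling the Win32_Process.Create method through the COM/WBEM interfaces; tools in the PsExec family instead install a temporary service via the Service Control Manager APIs.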

    Read the article

  • pushd - handling multiple drives from cmd

    - by user673600
    I'm trying to figure out how to install some programs whose components reside on two different drives on a networked path. Whenever I use pushd \\xyz\c$ I get a mapped drive, which means I cannot rely on the real drive letters - for example, running c:\install e:\mycomponents.dll. Is there any way I can do this once I have used the pushd command? How can I ensure that I keep the drives the same?

    I'm in the process of installing services. It seems that when I install a service, I need to keep the path the same as the actual location of the .exe, which means I'm running into issues. Is there a way to use pushd without actually mapping drives? When installing services with net use, I've found that there is an issue with installing on mapped drives: while the service can be installed, it doesn't find the actual .exe when it comes time to start the service. So is there a way to solve this, using net use or pushd or a combination, that lets me install a service such as: c:\windows\..\installutil e:\mynode?

    To clarify: I need to somehow be able to see both drives on the remote machine by their real drive letters, i.e. E:\ and C:\ - if I use a mapped drive letter, installing the service is a pain because I cannot use the path.
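
    A sketch of one workaround (server, share, and service names are placeholders): authenticate to the shares without mapping anything, then register the service against the remote Service Control Manager with the path written in the remote machine's own drive letters:

        rem authenticate; no drive letter means nothing gets remapped
        net use \\xyz\c$ /user:DOMAIN\admin

        rem copy components where they belong
        copy mynode.exe \\xyz\e$\mynode\

        rem create the service with the path as the remote machine sees it
        rem (note: sc requires the space after binPath=)
        sc \\xyz create MyNode binPath= "E:\mynode\mynode.exe"

    The image path is resolved by the remote SCM in its own context, so it must be a path local to that machine (E:\...), never a drive letter mapped in the installing session.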

    Read the article

  • Hundreds of custom UserControls create thousands of USER Objects

    - by Andy Blackman
    I'm creating a dashboard application that shows hundreds of "items" on a FlowLayoutPanel. Each "item" is a UserControl made up of 12 or so labels. My app queries a database and then creates an "item" instance for each record, populating the labels and textboxes with data before adding it to the FlowLayoutPanel. After adding about 560 items to the panel, I noticed that the USER Objects count in Task Manager had gone up to about 7300, much larger than for any other app on my machine. A quick bit of mental arithmetic (OK, I might have used calc.exe) shows that 560 * 13 (12 labels plus the UserControl itself) is 7280, which suddenly gave away where all the objects were coming from...

    Knowing that there is a 10,000 USER object limit before Windows throws in the towel, I'm trying to figure out better ways of drawing these items onto the FlowLayoutPanel. My ideas so far (see the sketch of the first one below):

    1. Owner-draw the "item", using Graphics.DrawString and DrawImage in place of many of the labels. I'm hoping this will mean 1 item = 1 USER object, not 13.
    2. Have 1 instance of the "item"; then, for each record, populate the instance and use the Control.DrawToBitmap() method to grab an image and use that in the FlowLayoutPanel (or similar).

    So... does anyone have any other suggestions?

    P.S. It's a zoomable interface, so I have already ruled out paging, as there is a requirement to see all items at once :( Thanks everyone.
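
    A minimal owner-drawn sketch of the first idea (the property names are made up): a single Control that paints all of its fields itself, so each item costs one window handle instead of thirteen:

        using System.Drawing;
        using System.Windows.Forms;

        public class DashboardItem : Control
        {
            public string Title { get; set; }
            public string[] Fields { get; set; }   // the 12 values formerly shown in labels

            public DashboardItem()
            {
                // Cuts flicker when hundreds of these repaint inside a FlowLayoutPanel.
                SetStyle(ControlStyles.OptimizedDoubleBuffer
                       | ControlStyles.AllPaintingInWmPaint
                       | ControlStyles.UserPaint, true);
            }

            protected override void OnPaint(PaintEventArgs e)
            {
                base.OnPaint(e);
                using (var brush = new SolidBrush(ForeColor))
                {
                    e.Graphics.DrawString(Title, Font, brush, 4, 4);
                    int count = Fields == null ? 0 : Fields.Length;
                    for (int i = 0; i < count; i++)
                        e.Graphics.DrawString(Fields[i], Font, brush, 4, 22 + i * 16);
                }
            }
        }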

    Read the article

  • What is the simplest way to map a folder on the file system to a url in Tomcat?

    - by Simon
    Here's my problem... I have a small prototype app (which happens to be in Grails, hosted on AWS) and I want to add the ability for the user to upload a few (max 10) images. I want to persist these images on disk on the server machine, in a folder location outside my WAR. I realise that there is probably a super-scalable solution involving more web servers and optimised static asset serving, but for the approximately 100 users I am likely to get, it's really not worth the effort and cost.

    So, what is the simplest way to have a virtual folder in my URL space map to a physical folder on disk? I sort of want http://myapp.com/static to map to a configurable folder, e.g. /var/www/static, so I can then have in my code:

        <img src="/static/user1/picture.jpg"/>

    I don't particularly mind whether the resulting physical folders are directly browsable. Security will eventually be an issue, but it isn't at the start. So, what are my options? I have looked at virtual hosts on the Apache site, but it feels more complicated than I need. I don't want to use the Grails static rendering plugins.
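
    In plain Tomcat, the smallest mechanism for this is an extra Context whose docBase points outside the webapp (the paths mirror the question; the element goes inside <Host> in conf/server.xml, or in its own conf/Catalina/localhost/static.xml descriptor):

        <Context path="/static" docBase="/var/www/static" />

    Tomcat's default servlet then serves files from /var/www/static at http://myapp.com/static/..., independently of the Grails WAR.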

    Read the article
