Search Results

Search found 9596 results on 384 pages for 'remote assistance'.

Page 352 of 384

  • Importing Excel spreadsheet data into existing Access DB

    - by Keeb13r
    I've designed an Access 2003 DB with 3 tables: APPLICATIONS, SERVERS, and INSTALLATIONS. Records in the APPLICATIONS and SERVERS tables are uniquely identified by a synthetic primary key (in Access, an "AutoNumber"). The INSTALLATIONS table is essentially a mapping table between APPLICATIONS and SERVERS: it's a list of records of which applications are installed on which servers. A record in the INSTALLATIONS table is also identified by a synthetic primary key, and it consists of an APPLICATION_ID and a SERVER_ID referencing records in their respective tables.

    I have an Excel 2003 spreadsheet I would like to import into this database, but it's proving difficult. The spreadsheet is made up of several tabs/worksheets, each one representing a server with its own listing of installed applications. I'm not sure how to proceed with the import - the "Get External Data -- Import" feature in Access has an "In an Existing Table" option, but it's greyed out. I'm also unsure how to build the relationships between applications and servers when importing records into the INSTALLATIONS table.

    I had previously fooled around with adding some security to the Access DB file. I think I removed everything, but perhaps I didn't and that's what's causing the problem?

    Some sample data from the Excel spreadsheet:

        SERVER101
        * Adobe Reader 9
        * BMC Remedy User 7.0
        * HostExplorer 2008
        * Microsoft Office 2003
        * Microsoft Office 2007
        * Notepad++

        SERVER102
        * Adobe Reader 9
        * DameWare Mini Remote Control
        * Microsoft Office 2003
        * Microsoft .NET Framework 3.5 SP1
        * Oracle 9.2

        SERVER103
        * AWDView
        * EXTRA! Personal Client 32-bit
        * Microsoft Office 2003
        * Microsoft .NET Framework 3.5 SP1
        * Snagit 9.1
        * WinZip 12.1

    The Access DB design is very simple:

        APPLICATION
        * APPLICATION_ID (AutoNumber)
        * APPLICATION_NAME (varchar)

        SERVER
        * SERVER_ID (AutoNumber)
        * SERVER_NAME (varchar)

        INSTALLATION
        * INSTALLATION_ID (AutoNumber)
        * APPLICATION_ID (number)
        * SERVER_ID (number)
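
    A minimal sketch of the import logic, assuming the worksheets have already been read into a dict (an in-memory SQLite database stands in for the Access tables here; table and column names follow the schema above, everything else is invented for illustration). The idea is to insert each application name once, then add one INSTALLATION row per (server, application) pair:

        import sqlite3

        # Hypothetical stand-in for the parsed worksheets: one entry per tab.
        worksheets = {
            "SERVER101": ["Adobe Reader 9", "BMC Remedy User 7.0", "Notepad++"],
            "SERVER102": ["Adobe Reader 9", "Oracle 9.2"],
        }

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE APPLICATION (APPLICATION_ID INTEGER PRIMARY KEY, APPLICATION_NAME TEXT UNIQUE);
            CREATE TABLE SERVER (SERVER_ID INTEGER PRIMARY KEY, SERVER_NAME TEXT UNIQUE);
            CREATE TABLE INSTALLATION (INSTALLATION_ID INTEGER PRIMARY KEY,
                                       APPLICATION_ID INTEGER, SERVER_ID INTEGER);
        """)

        def get_or_create(table, id_col, name_col, name):
            # Reuse the existing row if the name is already known, else insert it.
            row = db.execute(f"SELECT {id_col} FROM {table} WHERE {name_col} = ?", (name,)).fetchone()
            if row:
                return row[0]
            return db.execute(f"INSERT INTO {table} ({name_col}) VALUES (?)", (name,)).lastrowid

        for server, apps in worksheets.items():
            server_id = get_or_create("SERVER", "SERVER_ID", "SERVER_NAME", server)
            for app in apps:
                app_id = get_or_create("APPLICATION", "APPLICATION_ID", "APPLICATION_NAME", app)
                db.execute("INSERT INTO INSTALLATION (APPLICATION_ID, SERVER_ID) VALUES (?, ?)",
                           (app_id, server_id))
        db.commit()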

  • TCP Socket.Connect is generating false positives

    - by Mark
    I'm experiencing really weird behavior with the Socket.Connect method in C#. I am attempting a TCP Socket.Connect to a valid IP but closed port, and the method continues as if I have successfully connected. When I packet-sniffed what was going on, I saw that the app was receiving RST packets from the remote machine. Yet from the tracing that is in place, it is clear that the Connect method is not throwing an exception. Any ideas what might be causing this? The code that is running is basically this:

        IPEndPoint iep = new IPEndPoint(System.Net.IPAddress.Parse(m_ipAddress), m_port);
        Socket tcpSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        tcpSocket.Connect(iep);

    To add to the mystery: when running this code in a stand-alone console application, the result is as expected - the Connect method throws an exception. However, when running it in our Windows Service deployment, the Connect method does not throw an exception.

    Edit in response to Mystere Man's answer: How would the exception be swallowed? I have a Trace.WriteLine right above the .Connect method and a Trace.WriteLine right under it (not shown in the code sample for readability). I know that both traces are running. I also have a try/catch around the whole thing which also does a Trace.WriteLine, and I don't see that in the log files anywhere. I have also enabled the internal socket tracing as you suggested. I don't see any exceptions; I see what appear to be successful connections. I am trying to identify differences between the Windows Service app and the diagnostic console app I made, but I am running out of ideas. End edit.

    Thanks
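
    For comparison, the expected behavior is easy to reproduce in a quick script (a generic sketch with a hypothetical address, not the poster's code): connecting to a closed port should surface the RST as a refused connection rather than a successful connect.

        import socket

        # Attempt a TCP connection to a port assumed to be closed on the target.
        # On a closed port the RST normally surfaces as ConnectionRefusedError.
        target = ("192.0.2.10", 9999)  # hypothetical address/port

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(5)
        try:
            s.connect(target)
            print("connected (unexpected for a closed port)")
        except ConnectionRefusedError:
            print("connection refused, as expected")
        except socket.timeout:
            print("timed out (packet filtered rather than refused)")
        finally:
            s.close()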

  • Rails upload file to ftp server

    - by Bob
    I'm on Rails 2.3.5 and Ruby 1.8.6 and trying to figure out how to let a user upload a file to an FTP server on a different machine than my Rails app. Also, my Rails app will be hosted on Heroku, which doesn't facilitate the writing of files to the local filesystem.

    index.html.erb:

        <% form_tag '/ftp/upload', :method => :post, :multipart => true do %>
          <label for="file">File to Upload</label>
          <%= file_field_tag "file" %>
          <%= submit_tag 'Upload' %>
        <% end %>

    ftp_controller.rb:

        require 'net/ftp'

        class FtpController < ApplicationController
          def upload
            file = params[:file]
            ftp = Net::FTP.new('remote-ftp-server')
            ftp.login(user = "***", passwd = "***")
            ftp.puttextfile(file.read, File.basename(file.original_filename))
            ftp.quit()
          end

          def index
          end
        end

    Currently I'm just trying to get the Rails app to work on my Windows laptop. With the above code, I'm getting this error:

        Errno::ENOENT in FtpController#upload
        No such file or directory -.... followed by a dump of the file contents

    Anyone knows what's going on?
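
    A likely culprit (not confirmed in the thread): Net::FTP#puttextfile takes a local file *path* as its first argument, so passing file.read hands it the file's contents as if they were a path, which would produce exactly this Errno::ENOENT. The same filename-vs-file-object distinction exists in Python's ftplib, sketched here with hypothetical host and credentials; storbinary streams from an open file object, which is what an uploaded temp file gives you:

        from ftplib import FTP

        # Hypothetical server and credentials, for illustration only.
        ftp = FTP("remote-ftp-server.example.com")
        ftp.login(user="demo", passwd="secret")

        # storbinary takes an open file object, not a path, so an uploaded
        # temp file (or any readable binary stream) can be sent directly.
        with open("report.csv", "rb") as f:
            ftp.storbinary("STOR report.csv", f)

        ftp.quit()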

  • How to design authentication in a thick client, to be fail safe?

    - by Jay
    Here's a use case: I have a desktop application (built using Eclipse RCP) which, on start, pops open a dialog box with 'UserName' and 'Password' fields in it. Once the end user inputs his UserName and Password, a server is contacted (a Spring remote servlet, with the client side being a Spring HTTP client, similar to the approaches here), and authentication is performed on the server side.

    A few questions related to the above-mentioned scenario:

    1. If this authentication service were to go down, what would be the best way to handle further proceedings? Authentication is something that I cannot do away with.
    2. Would running the desktop client in a "limited" mode be a good idea? For instance, important features/menus/views would be disabled while the rest of the application remains accessible.
    3. Should I have a backup authentication service running on a different machine, working as a backup?
    4. What are the general best practices in this scenario? I remember reading about Google Gears and how it would let you edit and do stuff offline - should something like this be designed?

    Please let me know your design/architectural comments/suggestions. Appreciate your help.
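
    One common shape for the first two questions above (a sketch with invented endpoints and API, not a recommendation from the thread): try a primary authenticator, then a backup, and signal a limited mode that the UI can use to disable sensitive features when neither answers.

        import requests

        # Hypothetical endpoints; both are assumed to expose the same POST /auth API.
        AUTH_ENDPOINTS = [
            "https://auth-primary.example.com/auth",
            "https://auth-backup.example.com/auth",
        ]

        def authenticate(username, password):
            """Return (authenticated, limited_mode). limited_mode means no
            authenticator could be reached, so the UI should disable features."""
            for url in AUTH_ENDPOINTS:
                try:
                    resp = requests.post(url, json={"user": username, "password": password},
                                         timeout=5)
                    return (resp.status_code == 200, False)  # got a definitive answer
                except requests.ConnectionError:
                    continue                                  # try the next endpoint
                except requests.Timeout:
                    continue
            return (False, True)  # every endpoint down: run in limited mode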

  • Git: HEAD has disappeared, want to merge it into master.

    - by samgoody
    The top image is the output of git reflog. The bottom is what gitk in Git GUI (msysgit) shows me when I look at all branch history. The last few commits do not show in Git GUI.

    * Why do they not show in gitk (at least as a branch or something)? How do I merge them into master?
    * I gather this happened when I checked out tag 0.42. Why is that not the same as master? (I had tagged the master in its latest state.)
    * When I click push, why does the remote repo claim to be up to date? Shouldn't it try to update these commits into whatever branch they are in?

    The first of the questions is important - I would like to begin to understand what Git is thinking. It's more oracle than logic at this point. If it makes a difference to see the earlier history, the project is a [pretty powerful] JS color picker that can be viewed here in its entirety.

  • How to access a web service behind a NAT?

    - by jr
    We have a product we are deploying to some small businesses. It is basically a RESTful API over SSL using Tomcat. It is installed on the server in the small business and is accessed via an iPhone or other portable device, so the devices connecting to the server could come from any number of IP addresses.

    The problem comes with the installation. When we install this service, the port forwarding needed so the outside world can reach Tomcat always seems to become a problem - most of the time the owner doesn't know the router password, and so on. I am trying to research other ways we can accomplish this. I've come up with the following and would like to hear other thoughts on the topic:

    * Set up an SSH tunnel from each client office to a central server (sketched below). The remote devices would connect to that central server on a port, and the traffic would be tunneled back to Tomcat in the office. It seems kind of redundant to have SSH and then SSL, but there's really no other way to accomplish it, since end-to-end I need SSL (from device to office). I'm not sure of the performance implications here, but I know it would work. I would need to monitor the tunnel and bring it back up if it goes down, handle SSH key exchanges, etc.
    * Set up UPnP to try and configure the hole for me. This would likely work most of the time, but UPnP isn't guaranteed to be turned on. It may be a good next step.
    * Come up with some type of NAT traversal scheme. I'm just not familiar with these and am uncertain of how exactly they work.

    We have access to a centralized server, which is required for the authentication, if that makes it any easier. What else should I be looking at to get this accomplished?
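
    For the first option, the usual building block is a reverse tunnel such as ssh -N -R, kept alive by a supervisor. A minimal sketch, assuming key-based auth to a hypothetical central host is already set up:

        import subprocess
        import time

        # Reverse-tunnel sketch: the office box asks tunnel.example.com (hypothetical)
        # to listen on port 8443 and forward that traffic back to local Tomcat on 8443.
        CMD = [
            "ssh", "-N",
            "-R", "8443:localhost:8443",
            "-o", "ServerAliveInterval=30",    # detect dead tunnels
            "-o", "ExitOnForwardFailure=yes",  # fail fast if the remote port is taken
            "tunnel@tunnel.example.com",
        ]

        while True:  # crude supervisor: restart the tunnel whenever it drops
            proc = subprocess.run(CMD)
            print("tunnel exited with code", proc.returncode, "- restarting in 10s")
            time.sleep(10)

    In practice a tool like autossh does this supervision for you; the loop above just makes the mechanism visible.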

  • How can I call `update_attribute` for a list item in rails then actually update the html using jquery?

    - by Patrick Connor
    I want my users to be able to mark one or more items from an index view as "Active" or "Inactive" using a link. The text for the link should be state-aware - so it might default to "Mark as Active" if the corresponding attribute was false or null, and "Mark as Inactive" if true. Once the user clicks the link and the attribute is updated in the controller, the link text should update based on the new state. I am WAY off here, but this is a small sample of the code I have been trying...

    Controller:

        respond_to :html, :js
        ...
        def update
          @item = Item.find(params[:id])
          if @item.update_attributes(params[:item])
            # Not sure how to respond to .js here
          end
        end

    update.js.erb:

        # How do I identify which element to update?
        $('#item[13456]').html("State aware text for link_to")

    View:

        - for item in @items
          = item.name
          = link_to "Mark as Active", item_path(item), :method => :put, :remote => true, :id => "item[#{item.id}]"

    I am happy to read any APIs, blogs, tutorials, etc. I just can't seem to get my hands/mind around this task. Any help or guidance is greatly appreciated!

  • Limit CPU usage of a process

    - by jb
    I have a service running which periodically checks a folder for a file and then processes it (reads it, extracts the data, stores it in SQL). So I ran it on a test box and it took a little longer than expected. The file had 1.6 million rows, and it was still running after 6 hours (then I went home).

    The problem is that the box it is running on is now absolutely crippled - remote desktop was timing out, so I can't even get on it to stop the process or attach a debugger to see how far through it is. It's solidly using 90%+ CPU, and all other running services and apps are suffering.

    The code is (from memory, may not compile):

        List<ItemDTO> items = new List<ItemDTO>();
        using (StreamReader sr = fileInfo.OpenText())
        {
            while (!sr.EndOfFile)
            {
                string line = sr.ReadLine();
                try
                {
                    string s = line.Substring(0, 8);
                    double y = Double.Parse(line.Substring(8, 7));
                    // If the item isn't already in the collection, add it.
                    if (items.Find(delegate(ItemDTO i) { return (i.Item == s); }) == null)
                        items.Add(new ItemDTO(s, y));
                }
                catch { /* Crash */ }
            }
            return items;
        }

    So I am working on improving the code (any tips appreciated). But it still could be a slow affair, which is fine - I've no problem with it taking a long time as long as it's not killing my server. So what I want from you fine people is: 1) Is my code hideously unoptimized? 2) Can I limit the amount of CPU my code block may use? Cheers all
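
    The items.Find call above is a linear scan over a growing list, so the loop is O(n²) across 1.6 million rows; a keyed lookup makes it linear, and streaming plus an occasional sleep keeps the CPU from being pegged. A rough sketch of that fix (field widths taken from the question; the throttling numbers are invented for illustration):

        import time

        def load_items(path, throttle_every=10000, pause=0.05):
            """Parse fixed-width rows, keeping the first value seen per key."""
            items = {}
            with open(path) as f:
                for lineno, line in enumerate(f):
                    try:
                        key = line[0:8]
                        value = float(line[8:15])
                    except ValueError:
                        continue  # skip malformed rows instead of swallowing everything
                    if key not in items:          # O(1) hash lookup, not a linear scan
                        items[key] = value
                    if lineno % throttle_every == 0:
                        time.sleep(pause)         # crude way to yield CPU to other services
            return items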

  • Handling Apache Thrift list/map Return Types in C++

    - by initzero
    First off, I'll say I'm not the most competent C++ programmer, but I'm learning, and enjoying the power of Thrift. I've implemented a Thrift service with some basic functions that return void, i32, and list. I'm using a Python client controlled by a Django web app to make RPC calls and it works pretty well. The generated code is pretty straightforward, except for list returns:

        namespace cpp Remote

        enum N_PROTO {
            N_TCP,
            N_UDP,
            N_ANY
        }

        service Rcon {
            i32 ping()
            i32 KillFlows()
            i32 RestartDispatch()
            i32 PrintActiveFlows()
            i32 PrintActiveListeners(1:i32 proto)
            list<string> ListAllFlows()
        }

    The generated signatures from Rcon.h:

        int32_t ping();
        int32_t KillFlows();
        int32_t RestartDispatch();
        int32_t PrintActiveFlows();
        int32_t PrintActiveListeners(const int32_t proto);
        int64_t ListenerBytesReceived(const int32_t id);
        void ListAllFlows(std::vector<std::string> & _return);

    As you see, the generated ListAllFlows() function takes a reference to a vector of strings, while I expected it to return a vector of strings as laid out in the .thrift description. I'm wondering if I am meant to provide the function a vector of strings to modify, and Thrift will then handle returning it to my client despite the function returning void. I can find absolutely no resources or example usages of Thrift list<> types in C++. Any guidance would be appreciated.

  • What's the best Linux backup solution?

    - by Jon Bright
    We have four Linux boxes (all running Debian or Ubuntu) on our office network. None of these boxes are especially critical and they're all using RAID. To date, I've therefore been doing backups of the boxes by having a cron job upload tarballs containing the contents of /etc, MySQL dumps and other such changing, non-packaged data to a box at our geographically separate hosting centre. I've realised, however, that:

    * the tarballs are sufficient to rebuild from, but it's certainly not a painless process to do so (I recently tried this out as part of a hardware upgrade of one of the boxes); long-term, the process isn't sustainable. Each of the boxes is currently producing a tarball of a couple of hundred MB each day, 99% of which is the same as the previous day.
    * partly due to the size issue, the backup process requires more manual intervention than I want (to find whatever 5GB file is inflating the size of the tarball and kill it).
    * again due to the size issue, I'm leaving stuff out which it would be nice to include - the contents of users' home directories, for example. There's almost nothing of value there that isn't in source control (and these aren't our main dev boxes), but it would be nice to keep them anyway.
    * there must be a better way.

    So, my question is, how should I be doing this properly? The requirements are:

    * needs to be an offsite backup (one of the main things I'm doing here is protecting against fire/whatever)
    * should require as little manual intervention as possible (I'm lazy, and box-herding isn't my main job)
    * should continue to scale with a couple more boxes, slightly more data, etc.
    * preferably free/open source (cost isn't the issue, but especially for backups, openness seems like a good thing)
    * an option to produce some kind of DVD/Blu-Ray/whatever backup from time to time wouldn't be bad

    My first thought was that this kind of incremental backup was what tar was created for - create a tar file once each month, add incrementally to it, and rsync the results to the remote box. But others probably have better suggestions.
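
    One widely used approach matching these requirements is incremental snapshots via rsync with --link-dest; this is what tools like rsnapshot automate. A minimal sketch wrapping plain rsync, assuming SSH access to a hypothetical offsite host and invented paths:

        import datetime
        import subprocess

        # Hypothetical destination on the offsite box.
        DEST = "backup@offsite.example.com:/backups/officebox1"

        today = datetime.date.today().isoformat()

        # Unchanged files are hard-linked against yesterday's snapshot, so each
        # daily snapshot looks complete but only new/changed bytes are transferred
        # and stored.
        subprocess.run([
            "rsync", "-a", "--delete",
            "--link-dest=../latest",          # resolved relative to the new snapshot dir
            "/etc", "/home", "/var/backups/mysql",
            f"{DEST}/{today}/",
        ], check=True)

        # Repoint 'latest' at the snapshot we just made.
        subprocess.run(["ssh", "backup@offsite.example.com",
                        f"ln -sfn {today} /backups/officebox1/latest"], check=True)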

  • Load javascript in app engine

    - by user624392
    I am so confused about loading JavaScript in App Engine. I am using Django templates, and the problems are in my base HTML file.

    First, I can't load my downloaded jQuery from a local path like d:\jquery.js:

        <script src="d:\jquery.js" type="text/javascript"></script></head>

    This line is in my base HTML file. It only works when I load jQuery from a remote URL, like:

        <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.5.1/jquery.min.js" type="text/javascript"></script></head>

    I don't know why.

    Second, I can't load my own JavaScript into my HTML file. Say I create a script called layout.js and try to load it like this in my child HTML file (which, by the way, inherits from the base HTML):

        <body><script src="layout.js" type="text/javascript"></script></body>

    It doesn't work at all. The only way I have gotten it to work is to put the actual JavaScript in the body of my base HTML file, like:

        <body><script>
        $(document).ready($("#yes").click(function() {
            $("#no").hide("slow");
        }));
        </script>

    I don't know why either... Any help?

  • Sql Server Replication: Snapshot vs Merge

    - by Zyphrax
    Background information: Let's say I have two database servers, both SQL Server 2008. One is on my LAN (ServerLocal), the other in a remote hosting environment (ServerRemote). I have created a database on ServerLocal and have an exact copy of that database on ServerRemote. The database on ServerRemote is part of a web application, and I would like to keep its data up to date with the data in the database on ServerLocal. ServerLocal is able to communicate with ServerRemote; this is one-way traffic. Communication from ServerRemote to ServerLocal isn't available.

    Current solution: I thought it would be a nice solution to use replication, so I've made ServerLocal a publisher and subscriptions are pushed to ServerRemote. This works fine: when a snapshot is transferred to ServerRemote, the existing data is purged and the ServerRemote database is once again an exact replica of the database on ServerLocal.

    The problem: Records that exist on ServerRemote but don't exist on ServerLocal are removed. This doesn't matter for most of my tables, but in some of my tables I'd like to keep the existing data (aspnet_users, for instance) and update the records only where necessary. What kind of replication fits my problem?
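
    If built-in merge replication turns out to be too heavy for the handful of keep-don't-purge tables, the same effect can be approximated with a one-way upsert job pushed from ServerLocal. A minimal sketch using pyodbc, with assumed DSNs and invented aspnet_users columns; it updates matching rows and inserts missing ones, but never deletes on the remote side:

        import pyodbc

        src = pyodbc.connect("DSN=ServerLocal")    # hypothetical DSNs
        dst = pyodbc.connect("DSN=ServerRemote")

        rows = src.execute("SELECT UserId, UserName, Email FROM aspnet_users").fetchall()

        cur = dst.cursor()
        for user_id, name, email in rows:
            # Update the row if it already exists on the remote side...
            cur.execute(
                "UPDATE aspnet_users SET UserName = ?, Email = ? WHERE UserId = ?",
                name, email, user_id)
            # ...and insert it if it didn't. Rows that exist only remotely are untouched.
            if cur.rowcount == 0:
                cur.execute(
                    "INSERT INTO aspnet_users (UserId, UserName, Email) VALUES (?, ?, ?)",
                    user_id, name, email)
        dst.commit()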

  • What about parallelism across network using multiple PCs?

    - by MainMa
    Parallel computing is used more and more, and new framework features and shortcuts make it easier to use (for example the Parallel Extensions that are directly available in .NET 4). Now what about parallelism across a network? I mean an abstraction of everything related to communications, creation of processes on remote machines, etc. Something like, in C#:

        NetworkParallel.ForEach(myEnumerable, () => {
            // Computing and/or access to web resource or local network database here
        });

    I understand that it is very different from multi-core parallelism. The two most obvious differences would probably be:

    * Such a parallel task would be limited to computing, without being able, for example, to use files stored locally (but why not a database?), or even to use local variables, because it would really be two distinct applications rather than two threads of the same application.
    * The very specific implementation, requiring not just a separate thread (which is quite easy), but spawning a process on different machines, then communicating with them over the local network.

    Despite those differences, such parallelism is quite possible, even without speaking about distributed architecture. Do you think it will be implemented in a few years? Do you agree that it would enable developers to easily develop extremely powerful stuff with much less pain?

    Example: Think about a business application which extracts data from the database, transforms it, and displays statistics. Let's say this application takes ten seconds to load data, twenty seconds to transform data and ten seconds to build charts on a single machine in a company, using all of the CPU, whereas ten other machines are used at 5% CPU most of the time. In such a case, every action might be done in parallel, resulting in probably six to ten seconds for the overall process instead of forty.
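
    The single-machine version of this abstraction already exists in several stacks; in Python, for instance, concurrent.futures provides the ForEach-style call, and the executor is exactly the seam where a network-backed implementation could plug in. A sketch with a toy workload:

        from concurrent.futures import ProcessPoolExecutor

        def transform(record):
            # Stand-in for the expensive per-record work (toy example).
            return sum(i * i for i in range(record))

        if __name__ == "__main__":
            records = [100_000] * 40

            # Local parallel "ForEach" over worker processes. A network-parallel
            # implementation would keep this call shape and replace the executor
            # with one that dispatches to remote machines.
            with ProcessPoolExecutor() as pool:
                results = list(pool.map(transform, records))

            print(len(results), "records transformed")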

  • Configuring the GPS Intermediate Driver

    - by Zach Smith
    Hello, I have a Motorola MC75 with an integrated GPS system that I am trying to use for programming. I have done some research on it and tried setting it up by editing the remote registry of the device according to some specifications I found on this blog: http://csharponphone.blogspot.com/2007/07/configuring-gps-intermediate-driver.html. I used the smartphone guide at the bottom of the page, but to no avail.

    Currently, I am trying to test it with the GPS application provided in the Windows Mobile 6 SDK samples. The program loads and begins what looks like the search for satellites, but never locates anything, so I do not think the GPS is set up correctly. Does anyone have any helpful insight into this issue, or a guide for configuring the GPSID? I have already checked and tried some of the config help on MSDN, which does not help me either. I will surely vote for the most helpful answer. Thanks, Zach

  • How to Implement Rich Document Editor for iPhone

    - by benjismith
    I'm just getting started on a new iPhone/iPad development project, and I need to display a document with rich styled text (potentially with embedded images). The user will touch the document, dragging to highlight individual words or multiline text spans. When the text is highlighted, a context menu will appear, letting them change the color of highlighting or add margin notes (or other various bits of structured metadata). If you're familiar with adding comments to a Word document (or annotating a PDF), then this is the same sort of thing. But in my case, the typical user will spend many, many hours within the app, adding thousands (in some cases, tens of thousands) of small annotations to the central document. All of those bits of metadata will be stored locally, awaiting synchronization with a remote web service.

    I've read other pieces of advice where developers suggest creating a UIWebView control and passing it an HTML string. But that seems kind of clunky, especially with all the context-sensitivity that I want to include. Anyhow, I'm brand new to iPhone development and Objective-C, though I have ten years of software development experience using a variety of languages on many different platforms, so I'm not worried about getting my hands dirty writing new functionality from scratch. But if anyone has experience building a similar kind of component, I'm interested in hearing strategies for enabling that kind of rich document markup and annotation.

  • pushd - handling multiple drives from cmd

    - by user673600
    I'm trying to figure out how to install some programs whose components reside on two different drives on a networked path. Whenever I use pushd \\xyz\c$ I get a mapped drive, which means I cannot refer to the real drive letters, for example c:\install e:\mycomponents.dll. Is there any way that I can do this once I have used the pushd command? How can I ensure that I keep the drive letters the same?

    I'm in the process of installing services, and it seems that when I install a service, I need to keep the path the same as the actual location of the .exe, which is where I'm running into issues. Is there a way to simply use pushd but at the same time not actually map drives? When I've been using net use, I've found that there is an issue with installing on mapped drives: while the service can be installed, it doesn't find the actual .exe when it comes to starting up.

    So to expand on this: is there a way to solve this using net use or pushd, or a combination, that lets me install a service as such: c:\windows\..\installutil e:\mynode? To clarify, I need to somehow be able to see both drives on the remote machine by their real drive letters, i.e. E:\ and C:\ - if I use a mapped drive letter then installing the service is a pain, because I cannot use the path.

  • Enterprise Platform in Python, Design Advice

    - by Jason Miesionczek
    I am starting the design of a somewhat large enterprise platform in Python, and was wondering if you could give me some advice on how to organize the various components and which packages would help achieve the goals of scalability, maintainability, and reliability.

    The system is basically a service that collects data from various outside sources, with each outside source having its own separate application. These applications would poll a central database and get any requests that have been submitted to perform on the external source. There will be a main website and a REST/SOAP API that should also have access to the central data service.

    My initial thought was to use Django for the website, web service, and data access layer (using its built-in ORM); the outside-source applications could then use the web service(s) to get the information they need to process a request and save the results. Using this method would allow me to have multiple instances of the service applications running on the same or different machines to balance out the load. Are there more elegant means of accomplishing this? I've heard of messaging systems such as MQ; would something like that be beneficial in this scenario?

    My other thought was to use a completely separate data service not based on Django, and use some kind of remoting or remote objects (if they exist in Python) to interact with the data model. The downside here would be the website, which would become much slower if it had to push all of its data requests through a second layer. I would love to hear what other developers have come up with to achieve these goals in the most flexible way possible.
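
    A sketch of the polling-collector shape described above, with a hypothetical REST endpoint and payload standing in for the central Django service; each outside-source application could run a loop like this, which also shows why multiple instances balance naturally (each talks to the central service through the API rather than touching the database directly):

        import time
        import requests

        API = "https://central.example.com/api"   # hypothetical central service

        def process(req):
            # Stand-in for talking to the real external source.
            return {"status": "ok", "data": "..."}

        def run_collector(source_name):
            while True:
                # Ask the central service for pending requests for this source.
                resp = requests.get(f"{API}/requests",
                                    params={"source": source_name, "state": "pending"},
                                    timeout=10)
                for req in resp.json():
                    result = process(req)
                    requests.post(f"{API}/requests/{req['id']}/result",
                                  json=result, timeout=10)
                time.sleep(30)                     # poll interval

        if __name__ == "__main__":
            run_collector("source-a")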

  • Java Version of Action Delegate invokeLater

    - by ikurtz
    The issue I mentioned in this post is actually happening because of cross-threading GUI issues (I hope). Could you help me with the Java version of the action delegate, please? In C# it is done inline like this:

        this.Invoke(new Action(delegate() {...}));

    How is this achieved in Java? Thank you.

        public class processChatMessage implements Observer {
            public void update(Observable o, Object obj) {
                System.out.println("class class class" + obj.getClass());
                if (obj instanceof String) {
                    String msg = (String) obj;
                    formatChatHeader(chatHeader.Away, msg);
                    jlStatusBar.setText("Message Received");
                    // Show chat form
                    setVisibility();
                }
            }
        }

    processChatMessage is invoked by a separate thread, triggered by receiving new data from a remote node, and I think the error is being produced as it tries to update GUI controls. Do you think this is the reason? I ask because I'm new to Java and C#, but this is what I think is going on.

  • Rails Heroku Gemfile.lock error - push rejected (open source)

    - by KJ50
    I am trying to push my open source RoR application to Heroku but I'm having an issue making the initial push. I have read many similar questions, but none of those answers has helped to solve my problem. I have tried bundle update and bundle install various times. I have also tried removing and then re-committing my Gemfile.lock file, but I still get this same error:

        $ git push heroku master
        Counting objects: 5199, done.
        Compressing objects: 100% (3086/3086), done.
        Writing objects: 100% (5199/5199), 4.57 MiB | 131 KiB/s, done.
        Total 5199 (delta 3418), reused 3152 (delta 1962)
        -----> Removing .DS_Store files
        -----> Ruby app detected
        -----> Compiling Ruby/NoLockfile
         !
         !     Gemfile.lock required. Please check it in.
         !
         !     Push rejected, failed to compile Ruby app
        To [email protected]:frozen-springs-4725.git
         ! [remote rejected] master -> master (pre-receive hook declined)
        error: failed to push some refs to '[email protected]:frozen-springs-4725.git'

    Since my application uses MongoDB with MongoMapper, I suspect that I have some configuration incorrect. My code can be found here on GitHub (I'm currently working on the heroku branch). Feel free to clone our repository and try it yourself. If anyone has any insight which could help me resolve this issue I would be very thankful!

  • Configure nHibernate for multiple-project solution

    - by NoOne
    Hello, I'm doing a project with C# WinForms. The project is composed of:

    * Client project: Windows Forms where the user will do CRUD operations on the models;
    * Server project;
    * Common project: holds the models (in the image there is only the model Item);
    * ListSingleton: a remote object that will do the operations on the models.

    I already have all the communication working, but now I need to work on the persistence of the data in a MySQL database. I was trying to use NHibernate, but I'm having some trouble. My main problem is how to organize my NHibernate configuration:

    * In which project do I keep the mappings? The Common project?
    * In which project do I keep the NHibernate configuration file (App.config)? The ListSingleton project?
    * In which project do I run this?

        Configuration cfg = new Configuration();
        cfg.AddXmlFile("Item.hbm.xml");
        ISessionFactory factory = cfg.BuildSessionFactory();
        ISession session = factory.OpenSession();
        ITransaction transaction = session.BeginTransaction();

        Item newItem = new Item("BLAA");

        // Tell NHibernate that this object should be saved
        session.Save(newItem);

        // Commit all of the changes to the DB and close the ISession
        transaction.Commit();
        session.Close();

    In the ListSingleton project? Although I have a reference to the Common project in ListSingleton, I keep getting an error on the AddXmlFile line... My mapping is correct, because I tried it in a one-project solution and it worked :X

  • Ftp throws WebException only in .NET 4.0

    - by Trakhan
    I have the following C# code. It runs just fine when compiled against .NET Framework 3.5 or 2.0 (I did not test it against 3.0, but it will most likely work too). The problem is that it fails when built against .NET Framework 4.0.

        FtpWebRequest Ftp = (FtpWebRequest)FtpWebRequest.Create(Url_ + '/' + e.Name);
        Ftp.Method = WebRequestMethods.Ftp.UploadFile;
        Ftp.Credentials = new NetworkCredential(Login_, Password_);
        Ftp.UseBinary = true;
        Ftp.KeepAlive = false;
        Ftp.UsePassive = true;

        Stream S = Ftp.GetRequestStream();

        byte[] Content = null;
        bool Continue = false;
        do
        {
            try
            {
                Continue = false;
                Content = File.ReadAllBytes(e.FullPath);
            }
            catch (Exception)
            {
                Continue = true;
                System.Threading.Thread.Sleep(500);
            }
        } while (Continue);

        S.Write(Content, 0, Content.Length);
        S.Close();

        FtpWebResponse Resp = (FtpWebResponse)Ftp.GetResponse();
        if (Resp.StatusCode != FtpStatusCode.CommandOK)
            Console.WriteLine(Resp.StatusDescription);

    The problem is in the call Stream S = Ftp.GetRequestStream();, which throws an instance of WebException with the message "The remote server returned an error: (500) Syntax error, command unrecognized". Does anybody know why? PS. I am communicating with a virtual FTP server in ECM Alfresco.

  • Business Tier | client state in desktop application- way around Stateful Session Beans?

    - by arthur
    I read positive inputs on the following posts concerning client state management: Stateful EJBs in web application?, here, and here. I want to know how to implement such client state management for desktop applications (Swing, AWT, SWT, and others).

    Let's assume there are n desktop clients supposed to use some (remote) services provided on an application server. How do I maintain a separate and permanent data state for each client, distinct from the others, without having to use stateful session beans (SFSB)? Is that even possible for this type of application?

    With webapps (Servlets/JSF and JSP), one can avoid using SFSB through the HttpSession object, possibly coupled with stateless session beans (SLSB). The HttpSession object keeps each (web) client's information separate, and the SLSBs play the business logic music. But there is no HttpSession object in a desktop application, so I seem stuck with only SFSB and SLSB. Knowing the problems (concurrency, error handling, etc.) of SFSB, I would rather not use it. What would be the other options? Is SFSB the only one available? Thanks in advance.

  • Cross-Application User Authentication

    - by Chris Lieb
    We have a webapp written in .NET that uses NTLM for SSO. We are writing a new webapp in Java that will tightly integrate with the original application. Unfortunately, Java has no support for performing the server portion of NTLM authentication, and the only library that I can find requires too much setup to be allowed by IT. To work around this, I came up with a remote authentication scheme to work across applications and would like your opinions on it. It does not need to be extremely secure, but at the same time it should not be easily broken.

    1. User is authenticated into the .NET application using NTLM.
    2. User clicks a link that leaves the .NET application.
    3. The .NET application generates a random number and stores it in the user table along with the user's full username (domain\username).
    4. An insecure token is formed as random number:username.
    5. The insecure token is run through a secure cipher (likely AES-256), using a pre-shared key stored within the application, to produce a secure token.
    6. The secure token is passed as part of the query string to the Java application.
    7. The Java application decrypts the secure token using the same pre-shared key stored within its own code to get the insecure token.
    8. The random number and username are split apart.
    9. The username is used to retrieve the user's information from the user table, and the stored random number is checked against the one pulled from the insecure token.
    10. If the numbers match, the username is put into the session for the user and they are now authenticated.
    11. If the numbers do not match, the user is redirected to the .NET application's home page.
    12. The random number is removed from the database.
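
    The encrypt/decrypt halves of a scheme like the one above are easy to prototype. A minimal sketch in Python using the cryptography package's Fernet (an authenticated AES construction) instead of raw AES-256, since authenticated encryption also prevents token tampering; the token layout mirrors the steps above, and all names are illustrative:

        import secrets
        from cryptography.fernet import Fernet

        # Pre-shared key, generated once and embedded in both applications.
        PRESHARED_KEY = Fernet.generate_key()

        def make_secure_token(username):
            """Sending side: random number + username -> encrypted token."""
            nonce = secrets.randbelow(10**9)        # store this next to the user row
            insecure_token = f"{nonce}:{username}"
            secure_token = Fernet(PRESHARED_KEY).encrypt(insecure_token.encode())
            return nonce, secure_token

        def read_secure_token(secure_token):
            """Receiving side: decrypt and split; compare nonce against the DB copy."""
            insecure_token = Fernet(PRESHARED_KEY).decrypt(secure_token).decode()
            nonce, _, username = insecure_token.partition(":")
            return int(nonce), username

        # Round-trip demo
        stored_nonce, token = make_secure_token(r"CORP\jdoe")
        nonce, user = read_secure_token(token)
        assert (nonce, user) == (stored_nonce, r"CORP\jdoe")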

  • J2EE and alternatives

    - by Ilya K
    Hello, I am a J2SE developer with a rich web background (PHP, Perl/CGI and so on), and now I am starting a new project. It will have a web interface, spaghetti business logic, a relational database as storage, and connections to other services. I am doing it from scratch.

    My colleagues told me to use Spring, Spring Security and Struts. I looked briefly at the J2EE spec and found that it covers almost all aspects of an enterprise application. I asked my colleagues why they need Spring and Struts, but it looks like they use those technologies simply because they are familiar with them and not familiar with the classic J2EE stack. So, my question is: what is bad about J2EE?

    * Why do I need Spring if there are JNDI lookups? It would take a day or two to create a fake InitialContext for unit tests, and that is all: I am set without external tools like Spring.
    * Why do I need Spring Security if there is security built into the Servlet spec?
    * I can map any request to any servlet using web.xml; no struts.xml is needed. I can use servlet filters instead of Struts interceptors.
    * There is RMI, so I do not need Spring Remoting.

    And so on... Why should I bother with all that fancy stuff if there is J2EE? I really want to find a situation where J2EE is not enough. Do you have any? Thanks!

  • How reference popup object when opener page changes?

    - by achairapart
    This is driving me crazy! Scenario: the main page opens a pop-up window and later calls a function in it:

        newWin = window.open(someurl, 'newWin', options);
        // ...some code later...
        newWin.remoteFunction(options);

    and it's OK. Now, with the popup still open, the main page navigates to Page 2. In Page 2, newWin doesn't exist anymore, and I need to recreate the popup window object reference in order to call newWin.remoteFunction again. I tried something like:

        newWin = open('', 'newWin', options);
        if (!newWin || newWin.closed || !newWin.remoteFunction) {
            newWin = window.open(someurl, 'newWin', options);
        }

    And it works - I can call newWin.remoteFunction again - BUT for some reason Safari gives focus() to the popup window every time the open() method is called, breaking the navigation (I absolutely need the popup working in the background).

    The only workaround I can think of to solve this is to create an interval in the popup with:

        if (window.opener && !window.opener.newWin)
            window.opener.newWin = self;

    and then set another interval in the opener page with some try/catch, but that is inelegant and very inefficient. So, I wonder: is it really so hard to get the popup object reference across different pages in the opener window?
