Search Results

Search found 24387 results on 976 pages for 'ssh client'.

  • How to return a variable for all tests to use in unittest

    - by chrissygormley
    Hello, I have a Python script and I am trying to set a variable so that if the first test fails, the rest of them will be set to fail. The script I have so far is:

        class Tests():
            def function: result function..........

            def errorHandle(self):
                return self.error

            def sudsPass(self):
                try:
                    result = self.client.service.GetStreamUri(self.stream, self.token)
                except suds.WebFault, e:
                    assert False
                except Exception, e:
                    pass
                finally:
                    if 'result' in locals():
                        self.error = True
                        self.errorHandle()
                        assert True
                    else:
                        self.error = False
                        self.errorHandle()
                        assert False

            def sudsFail(self):
                try:
                    result = self.client.service.GetStreamUri(self.stream, self.token)
                except suds.WebFault, e:
                    assert False
                except Exception, e:
                    pass
                finally:
                    if 'result' in locals() or self.error == False:
                        assert False
                    else:
                        assert True

        class GetStreamUri(TestGetStreamUri):
            def runTest(self):
                self.sudsPass()

        class GetStreamUriProtocolFail(TestGetStreamUri):
            def runTest(self):
                self.stream.Transport.Protocol = "NoValue"
                self.errorHandle()
                self.sudsFail()

        if __name__ == '__main__':
            unittest.main()

    I am trying to get self.error set to False if the first test fails. I understand that it is being set in another test, but I was hoping someone could help me find a solution to this problem by some other means. Thanks. PS: Please ignore the strange tests; there is a problem with the error handling at the moment.
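
    A minimal sketch of one way to do this with a class-level flag, assuming the goal is "abort the remaining tests once the first one fails" (BaseTest and do_first_check are made-up names, not from the original post):

        import unittest

        class BaseTest(unittest.TestCase):
            # class attribute: shared by every test, across the separate
            # TestCase instances that unittest creates for each test
            failed = False

            def setUp(self):
                if BaseTest.failed:
                    self.fail("aborted: an earlier test already failed")

        class FirstTest(BaseTest):
            def runTest(self):
                try:
                    assert do_first_check()   # hypothetical check
                except Exception:
                    BaseTest.failed = True    # flip the shared flag for the rest
                    raise

    Because the flag lives on the class rather than on self, it survives between tests, which is exactly what an instance attribute like self.error cannot do.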

  • Need help finding an appropriate task assignment algorithm for a college project involving coordination

    - by Trif Mircea
    Hello. I am a long-time lurker here and have found over time many answers regarding jQuery and web development topics, so I decided to ask a question of my own. This time I have to create a C++ project for college which should help manage the workflow of a company providing all kinds of services through in-the-field teams. The ideas I have so far: a client-server application, where the server is a dispatcher to which all the orders from clients go, and the clients are mobile devices (PDAs), each team in the field having one. An order from a client is a task, and each task is made up of a series of subtasks. You have a database with estimations of how long a task should take to complete. You also know which tasks or subtasks each team in the field can perform, based on what kind of specialists make up the team (not going to complicate the problem by adding needed materials; it is assumed that if a member of a team can perform a subtask, he has the stuff needed). Now, knowing these factors, what would a good task assignment algorithm be? The criteria are: how many tasks a team can do, and how many tasks they have in the queue. It could also be location (how far away they are from the place), but I don't think I can implement that. It needs to be efficient, and also to adapt quickly if the human dispatcher manually assigns a task. Any help or leads would be really appreciated. Also, I'm not 100% sure about the idea, so if you have another way you would go about creating such an application, please share, even if it is just a quick outline. I have to write a theoretical part too, so even if the ideas are far more complex than what I outlined, that would be OK; I'd write those up and implement what I can.
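
    A minimal greedy-dispatch sketch in C++ matching the criteria above (capability first, then shortest queue); all names and the scoring rule are illustrative assumptions, not a prescribed design:

        #include <set>
        #include <string>
        #include <vector>

        struct Team {
            std::string id;
            std::set<std::string> skills;  // subtask types this team can perform
            int queuedMinutes = 0;         // estimated work already assigned
        };

        struct Task {
            std::vector<std::string> subtasks;  // required subtask types
            int estimatedMinutes = 0;
        };

        static bool canDo(const Team& t, const Task& task) {
            for (const auto& s : task.subtasks)
                if (!t.skills.count(s)) return false;
            return true;
        }

        // Pick the capable team with the least queued work (earliest-finish greedy).
        // Returns nullptr if no team qualifies, so the dispatcher can intervene.
        Team* assign(std::vector<Team>& teams, const Task& task) {
            Team* best = nullptr;
            for (auto& t : teams) {
                if (!canDo(t, task)) continue;
                if (!best || t.queuedMinutes < best->queuedMinutes) best = &t;
            }
            if (best) best->queuedMinutes += task.estimatedMinutes;
            return best;
        }

    Manual assignments by the dispatcher fit in naturally: add the task's estimate to the chosen team's queuedMinutes and the greedy rule adapts on the next automatic pick. For the theoretical part, this is a form of online list scheduling; distance could later be folded in as a weighted term in the score.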

  • How to set umask for a folder and its subfolders?

    - by Cyril N.
    I'm working in the same directory with some friends, and they access it via SSH. I added us all to the same group and set a sticky (setgid) bit to keep the user:group values the same. But when a user creates a file or folder, the write bit is not set for the group, preventing the others from writing to it. How can I define the umask to add group write in this specific directory and its subfolders? I tried to find some help before, but I only saw guides for Fedora/CentOS, and I'm using Debian Squeeze. Thanks for your help.
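
    A sketch of the usual fix, assuming the directory is /srv/shared and the group is "devs". umask is a per-process setting, not per-directory, so for a per-directory default you want default POSIX ACLs (package "acl" on Debian, filesystem mounted with ACL support):

        # group ownership plus setgid so new entries inherit the group
        chgrp -R devs /srv/shared
        chmod -R g+w /srv/shared
        find /srv/shared -type d -exec chmod g+s {} \;

        # default ACLs: anything created below inherits group write
        setfacl -R    -m g:devs:rwX /srv/shared
        setfacl -R -d -m g:devs:rwX /srv/shared

    The alternative is forcing umask 002 in every user's login shell, but that applies everywhere rather than just in the shared tree, which is why the ACL route is usually preferred.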

  • How To Create An FTP User That Has Permission To EVERYTHING

    - by Serg
    I've spent the last two hours trying to create an FTP user so I can transfer some files over to my WordPress blog folder, /var/www/sergiotapia.me. I'm using vsftpd and Ubuntu 12.04 for my FTP server, and I've read tons of documentation, none of which seems to work. I still cannot log in with the FTP user, let alone test whether I even have read/write permissions. Can a Linux guru here help me out with a small step-by-step? I'm comfortable with the terminal and nano, so that's not an issue; I'll SSH into my box. Just tell me what to do and what commands to run. Specifically, this user needs to have read and write access to the /var/ folder and anything within it. I want to have one user that can do whatever the heck he wants on my Ubuntu 12.04 VPS machine.
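
    A hedged step-by-step sketch ("serg" is an example username; note that giving one FTP account write access over a whole system tree is risky on a public VPS):

        # 1. create the user with the web root as its home
        sudo adduser serg --home /var/www

        # 2. give it ownership (or group write) over the tree it must manage
        sudo chown -R serg:serg /var/www

        # 3. /etc/vsftpd.conf - the three settings that usually matter:
        #    local_enable=YES       # allow local Unix users to log in
        #    write_enable=YES       # allow uploads/deletes
        #    chroot_local_user=YES  # jail the user to its home
        sudo service vsftpd restart

    If login still fails after this, /var/log/vsftpd.log generally says why; one known trap is that the vsftpd build in 12.04 refuses to chroot a user into a writable home directory.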

  • JAX-RS: How to return JSON and HTTP status code together?

    - by masato-san
    I'm writing a REST web app (NetBeans 6.9, JAX-RS, TopLink Essentials) and trying to return JSON and an HTTP status code together. I have code ready and working that returns JSON when the HTTP GET method is called from the client. Code snippet:

        @Path("get/id")
        @GET
        @Produces("application/json")
        public M_?? getMachineToUpdate(@PathParam("id") String id) {
            // some code to return JSON
            .
            .
            return myJson;
        }

    But I also want to return an HTTP status code (500, 200, 204, etc.) along with the JSON. I tried using the HttpServletResponse object:

        response.sendError(500, "error message");

    But this made the browser think it was a "real" 500, so the output page was the regular HTTP 500 error page. What I want is just to return the status code, so that my JavaScript on the client side can handle logic depending on which HTTP status code is returned (maybe just displaying the error code and message on the HTML page). Is it possible to do so? Or should HTTP status codes not be used for such a thing?
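
    A sketch using the standard JAX-RS Response builder, which sets the status and the JSON entity in one go (the entity class and error payload here are placeholders):

        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.PathParam;
        import javax.ws.rs.Produces;
        import javax.ws.rs.core.Response;

        @Path("get/{id}")
        public class MachineResource {

            @GET
            @Produces("application/json")
            public Response getMachineToUpdate(@PathParam("id") String id) {
                Object myJson = findMachine(id);   // placeholder lookup
                if (myJson == null) {
                    // any status works here: 500, 404, 204, ...
                    return Response.status(500)
                                   .entity("{\"error\":\"machine not found\"}")
                                   .build();
                }
                return Response.ok(myJson).build(); // 200 plus the JSON body
            }

            private Object findMachine(String id) { return null; /* stub */ }
        }

    Unlike sendError(), this does not trigger the container's error page, so an XMLHttpRequest caller sees both the status code and the JSON body and can branch on xhr.status.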

  • Linux: selective sudo access for a particular command

    - by bguiz
    Hi, is it possible to grant a particular user sudo access for one particular command only? Thanks. -- More info: We farm out lengthy optimisation runs to each other's boxes over SSH. These runs take hours, sometimes days. The shutdown command can only be run with sudo. Being conscious of my environmental footprint, I would like to give the initiator(s) of these runs sudo access to the shutdown command on my box, without sudo access to everything else, so that they may shut down my machine when they no longer need it. I am aware that I can schedule a shutdown before I leave my box, but I am looking for a better solution.
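
    A sketch of the sudoers syntax that does exactly this (edit with visudo; the usernames are examples):

        # /etc/sudoers - let alice run only shutdown, nothing else
        alice ALL = /sbin/shutdown

        # optionally skip the password prompt, useful for scripted runs:
        alice ALL = NOPASSWD: /sbin/shutdown

        # or grant it to several initiators at once:
        User_Alias RUNNERS = alice, bob
        RUNNERS ALL = NOPASSWD: /sbin/shutdown

    The remote user then runs "sudo /sbin/shutdown -h now"; any other sudo command is refused.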

  • Forms authentication: disable redirect to the login page

    - by codeka
    I have an application that uses ASP.NET Forms Authentication. For the most part it's working great, but I'm trying to add support for a simple API via an .ashx file. I want the .ashx file to have optional authentication (i.e. if you don't supply an Authentication header, it just works anonymously). But, depending on what you do, I want to require authentication under certain conditions. I thought it would be a simple matter of responding with status code 401 if the required authentication was not supplied, but it seems the Forms Authentication module intercepts that and responds with a redirect to the login page instead. What I mean is, if my ProcessRequest method looks like this:

        public void ProcessRequest(HttpContext context)
        {
            context.Response.StatusCode = 401;
            context.Response.StatusDescription = "Authentication required";
        }

    then instead of getting a 401 error code on the client, like I expect, I'm actually getting a 302 redirect to the login page. For normal HTTP traffic I can see how that would be useful, but for my API page I want the 401 to go through unmodified, so that the client-side caller can respond to it programmatically instead. Is there any way to do that?
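
    Two hedged options, depending on the framework version. On .NET 4.5 or later there is a per-response flag for exactly this case:

        public void ProcessRequest(HttpContext context)
        {
            context.Response.StatusCode = 401;
            context.Response.SuppressFormsAuthenticationRedirect = true; // .NET 4.5+
            context.Response.StatusDescription = "Authentication required";
        }

    On earlier versions, a common workaround is to undo the redirect late in the pipeline, assuming API calls are recognizable somehow (here, by the .ashx extension):

        // Global.asax
        protected void Application_EndRequest(object sender, EventArgs e)
        {
            HttpContext ctx = HttpContext.Current;
            if (ctx.Response.StatusCode == 302 &&
                ctx.Request.Path.EndsWith(".ashx", StringComparison.OrdinalIgnoreCase))
            {
                ctx.Response.ClearHeaders();  // drops the Location header
                ctx.Response.StatusCode = 401;
            }
        }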

  • Serialization of Mouse cursors over network

    - by Ehtsham
    Hi, I am working on a client/server application in C#. My server captures the current mouse cursor and sends it to the client, so that the client's cursor also changes accordingly. I can detect Windows cursors and serialize them over a BinaryFormatter, and that works fine. The problem is that there are many cursors that cannot be detected this way (the mspaint cursors, for example), so for those I have to take the handle, create the cursor with its x and y hotspots, add them to an ArrayList and serialize that over the network. But after 10 to 15 minutes it throws the exception "Error HRESULT E_FAIL has been returned from a call to a COM Component", and the client throws a "method of invocation" exception. Can anybody guide me on what is going wrong, or suggest a better way to do this? Some code is here:

        IntPtr curInfo = GetCurrentCursor();
        Cursor cur;
        Icon ic;
        byte cursor = 0;
        if (curInfo != null && curInfo.ToInt32() != 0)
        {
            cur = CheckForCusrors(curInfo);
            try
            {
                if (!isLinuxClient)
                {
                    if (cur == null)
                    {
                        PlatformInvokeUSER32.GetIconInfo(curInfo, out temp);
                        ic = Icon.FromHandle(curInfo);
                        //bitmap = ic.ToBitmap();
                        ArrayList ar = new ArrayList();
                        ar.Add(ic);
                        ar.Add(temp.xHotspot);
                        ar.Add(temp.yHotspot);
                        b.Serialize(stm, ar);
                    }
                    else
                    {
                        ArrayList ar = new ArrayList();
                        ar.Add(cur);
                        b.Serialize(stm, ar);
                    }
                }

        public Cursor CheckForCusrors(IntPtr hCur)
        {
            if (hCur == Cursors.AppStarting.Handle)
                return Cursors.AppStarting;
            else if (hCur == Cursors.Arrow.Handle)
                return Cursors.Arrow;
            .
            .
            .
            else if (hCur == Cursors.PanWest.Handle)
                return Cursors.PanWest;
            return null;
        }
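
    A hedged guess at the E_FAIL after 10-15 minutes: Icon.FromHandle does not copy or take ownership of the HICON, so capturing the cursor many times a second leaks one GDI handle per capture until GDI calls start failing process-wide. A sketch of the copy-and-release pattern:

        using System;
        using System.Drawing;
        using System.Runtime.InteropServices;

        static class CursorCapture
        {
            [DllImport("user32.dll")]
            static extern IntPtr CopyIcon(IntPtr hIcon);

            [DllImport("user32.dll")]
            static extern bool DestroyIcon(IntPtr hIcon);

            // Snapshot the cursor as a Bitmap we fully own, then free the handle.
            public static Bitmap Capture(IntPtr hCursor)
            {
                IntPtr hCopy = CopyIcon(hCursor);   // private copy of the HICON
                try
                {
                    using (Icon ic = Icon.FromHandle(hCopy))
                        return ic.ToBitmap();       // serialize this, not the Icon
                }
                finally
                {
                    DestroyIcon(hCopy);             // give the GDI handle back
                }
            }
        }

    Serializing a plain Bitmap (plus the two hotspot ints) also sidesteps sending Icon/Cursor objects over the wire, which is friendlier if a non-Windows client ever has to decode the stream.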

  • Red Hat 5.4 slow processing

    - by yucefrizk
    I'm running Red Hat Linux 5.4 on an HP DL580 server with 16 processors and 64 GB of RAM. I connect to the server remotely through SSH. After entering the password, it takes a long time for the command line to appear; if I press Ctrl+C during this wait, I get a prompt, but not the correct bash prompt (I have to run bash to get to my proper prompt). I tried to install Apache on the server: ./configure took 4 hours to finish instead of one or two minutes, and an Oracle installation showed the same behaviour. The server disks are mirrored using a RAID controller. Any idea what could be the reason for this slowness?
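
    Some hedged first checks, since a slow login plus a 4-hour ./configure points at I/O (a degraded array, or a RAID write cache disabled because its battery failed) rather than CPU; commands assume standard RHEL tooling:

        # raw write speed - should be tens of MB/s, not KB/s
        time dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 oflag=direct

        iostat -x 5      # sysstat package: %util pinned at 100, huge await?
        vmstat 5         # heavy "wa" (I/O wait) or swapping?
        dmesg | tail -50 # controller resets, disk timeouts?

    If the dd test is slow, check the RAID controller's cache/battery state (HP's hpacucli on this hardware) before anything else; if it is fast, the login delay could instead be a name-service stall (a broken DNS or LDAP lookup consulted at login).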

  • Amazon API ItemSearch returns (400) Bad Request

    - by BuzzBubba
    I'm using a simple example from the Amazon documentation for ItemSearch, and I get a strange error: "The remote server returned an unexpected response: (400) Bad Request." This is the code:

        public static void Main()
        {
            // Remember to create an instance of the amazon service, including your Access ID.
            AWSECommerceServicePortTypeClient service = new AWSECommerceServicePortTypeClient(
                new BasicHttpBinding(),
                new EndpointAddress("http://webservices.amazon.com/onca/soap?Service=AWSECommerceService"));
            AWSECommerceServicePortTypeClient client = new AWSECommerceServicePortTypeClient(
                new BasicHttpBinding(),
                new EndpointAddress("http://webservices.amazon.com/onca/soap?Service=AWSECommerceService"));

            // prepare an ItemSearch request
            ItemSearchRequest request = new ItemSearchRequest();
            request.SearchIndex = "Books";
            request.Title = "Harry+Potter";
            request.ResponseGroup = new string[] { "Small" };

            ItemSearch itemSearch = new ItemSearch();
            itemSearch.Request = new ItemSearchRequest[] { request };
            itemSearch.AWSAccessKeyId = accessKeyId;

            // issue the ItemSearch request
            try
            {
                ItemSearchResponse response = client.ItemSearch(itemSearch);

                // write out the results
                foreach (var item in response.Items[0].Item)
                {
                    Console.WriteLine(item.ItemAttributes.Title);
                }
            }
            catch (Exception e)
            {
                Console.ForegroundColor = ConsoleColor.Red;
                Console.WriteLine(e.Message);
                Console.ForegroundColor = ConsoleColor.White;
                Console.WriteLine("Press any key to quit...");
                Clipboard.SetText(e.Message);
            }
            Console.ReadKey();
        }

    What is wrong?
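
    One hedged lead: since mid-2009 the Product Advertising API rejects unsigned requests, and an unsigned SOAP call commonly comes back as (400) Bad Request regardless of the payload. Signed calls must also use the HTTPS endpoint, roughly:

        // transport security instead of the plain-HTTP endpoint
        BasicHttpBinding binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);
        AWSECommerceServicePortTypeClient client = new AWSECommerceServicePortTypeClient(
            binding,
            new EndpointAddress("https://webservices.amazon.com/onca/soap?Service=AWSECommerceService"));

        // ...plus a message inspector on the endpoint behaviors that adds the
        // AWS Signature and Timestamp SOAP headers to every request (Amazon
        // ships sample code for this with the Product Advertising API docs).

    Separately, request.Title = "Harry+Potter" looks suspicious: the SOAP binding does its own encoding, so a literal "Harry Potter" is more likely what the service expects.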

  • Logging upload attempt with proftpd

    - by Amit Sonnenschein
    I have a logging server that I use with external hardware. The idea is that a special piece of hardware uploads logs about its operation every few hours, and from the server I can do whatever I need to do with the information. The old server was getting a bit too old and I've moved to a new one, where I installed LAMP, proftpd and SSH (just the same as I had on the old server). Now, for some reason, the logs are not being uploaded and I don't know why. The hardware uses direct FTP access; I've checked proftpd.log and saw that the connection is not being rejected (just to make sure I didn't make a mistake with the user/pass). My problem is that, for some reason, the upload itself is failing. It might be due to a wrong path (as it's hard-coded in the hardware), but I can't really know, as proftpd won't give me any details. I've tried to change the log level to "debug", thinking it would give me more information, but I don't see any change. Is there any other way I can make sure proftpd logs EVERYTHING?
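
    A sketch of the proftpd.conf knobs that usually surface this kind of failure (paths are examples; directive names are from the stock proftpd documentation):

        # maximum daemon chatter in the system log
        DebugLevel 9

        # classic wu-ftpd style transfer log: one line per upload/download
        TransferLog /var/log/proftpd/xferlog

        # per-command logging; ALL includes failed STOR attempts and their paths
        ExtendedLog /var/log/proftpd/all.log ALL

        # or narrower - just file reads/writes:
        # ExtendedLog /var/log/proftpd/rw.log READ,WRITE

    The ExtendedLog entry for the failed STOR will show the exact path the hardware tries to write to, which is the quickest way to confirm or rule out the hard-coded-path theory.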

  • PHP static function self:: in Joomla JFactory class explanation?

    - by Carbon6
    Hi, I'm looking at the code of Joomla and trying to figure out what exactly happens in this function. index.php makes a call to the function:

        $app = JFactory::getApplication('site');

    jfactory.php code:

        public static function getApplication($id = null, $config = array(), $prefix = 'J')
        {
            if (!self::$application) {
                jimport('joomla.application.application');
                self::$application = JApplication::getInstance($id, $config, $prefix);
            }
            return self::$application;
        }

    application.php code:

        public static function getInstance($client, $config = array(), $prefix = 'J')
        {
            static $instances;

            if (!isset($instances)) {
                $instances = array();
            }

            ....... more code ........

            return $instances[$client];
        }

    Now, I cannot figure out why self::$application is used in getApplication():

        self::$application = JApplication::getInstance($id, $config, $prefix);

    $application is always null, so what is the purpose of this approach? I tried modifying it to $var = JApplication::getInstance($id, $config, $prefix); and returning that, but it doesn't work. I would be very glad if someone with more knowledge could explain, as detailed as possible, what is happening here. Many thanks.
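
    A stripped-down sketch of the same pattern, to show why the static property matters (this is the classic singleton/factory cache, not Joomla's actual code):

        <?php
        class Application
        {
            public function __construct()
            {
                echo "expensive construction runs once\n";
            }
        }

        class Factory
        {
            protected static $application = null; // shared by ALL calls, not per object

            public static function getApplication()
            {
                if (!self::$application) {
                    // only reached the first time; afterwards the cached one is reused
                    self::$application = new Application();
                }
                return self::$application;
            }
        }

        $a = Factory::getApplication();  // prints the construction message
        $b = Factory::getApplication();  // silent: cached instance returned
        var_dump($a === $b);             // bool(true)

    So self::$application is only null on the very first call; storing the instance in the static property is what makes every later JFactory::getApplication() return the same JApplication object. Replacing it with a local $var breaks that, because a local variable is thrown away when the function returns.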

  • Ethernet port sleeping on PS3 running Linux

    - by Doug
    My lab has a PS3 running Ubuntu Linux 9.04 Server Edition. After a period of a few hours with no use, the Ethernet connection (eth0) seems to go to sleep, causing the connection to be lost. Pinging or trying to SSH into the machine results in no response. The fix I've been using is to access the machine locally and restart it (trying to bring eth0 down then up doesn't seem to correct it). I've tried setting up an hourly cron job that runs on the PS3 and pings another machine just to create network activity, but this doesn't seem to solve the problem either. Update: The solution was to run the above cron job much more frequently: every 10 minutes works.
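
    For reference, the working keepalive is a one-line crontab entry along these lines (target host and interval are examples):

        # crontab -e  - ping the gateway every 10 minutes to keep eth0 awake
        */10 * * * * ping -c 3 192.168.1.1 > /dev/null 2>&1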

  • HTTP authentication with Apache HttpComponents

    - by matdan
    Hi, I am trying to develop a Java HTTP client with Apache HttpComponents 4.0.1. This client calls the page "https://myHost/myPage". This page is protected on the server by a JNDIRealm with form-based login authentication, so when I try to get https://myHost/myPage I get a login page. I tried to bypass it, unsuccessfully, with the following code:

        // I set my proxy
        HttpHost proxy = new HttpHost("myProxyHost", myProxyPort);

        // I add supported schemes
        SchemeRegistry supportedSchemes = new SchemeRegistry();
        supportedSchemes.register(new Scheme("http",
                PlainSocketFactory.getSocketFactory(), 80));
        supportedSchemes.register(new Scheme("https",
                SSLSocketFactory.getSocketFactory(), 443));

        // prepare parameters
        HttpParams params = new BasicHttpParams();
        HttpProtocolParams.setVersion(params, HttpVersion.HTTP_1_1);
        HttpProtocolParams.setContentCharset(params, "UTF-8");
        HttpProtocolParams.setUseExpectContinue(params, true);

        ClientConnectionManager ccm = new ThreadSafeClientConnManager(params, supportedSchemes);
        DefaultHttpClient httpclient = new DefaultHttpClient(ccm, params);
        httpclient.getParams().setParameter(ConnRoutePNames.DEFAULT_PROXY, proxy);

        // I add my authentication information
        httpclient.getCredentialsProvider().setCredentials(
                new AuthScope("myHost/myPage", 443),
                new UsernamePasswordCredentials("username", "password"));

        HttpHost host = new HttpHost("myHost", 443, "https");
        HttpGet req = new HttpGet("/myPage");

        // show the page
        ResponseHandler<String> responseHandler = new BasicResponseHandler();
        String rsp = httpclient.execute(host, req, responseHandler);
        System.out.println(rsp);

    When I run this code, I always get the login page, not myPage. How can I apply my credential parameters to avoid this login form? Any help would be fantastic.
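
    A hedged sketch of the usual fix: form login is an HTML form, not HTTP Basic/Digest, so CredentialsProvider never comes into play. Instead you submit the container's login form yourself and let the client carry the session cookie (j_username/j_password/j_security_check are the standard servlet-container names; adjust if the form differs):

        // 1. hit the protected page once so the container creates a session
        httpclient.execute(host, new HttpGet("/myPage"), new BasicResponseHandler());

        // 2. submit the login form
        List<NameValuePair> form = new ArrayList<NameValuePair>();
        form.add(new BasicNameValuePair("j_username", "username"));
        form.add(new BasicNameValuePair("j_password", "password"));
        HttpPost login = new HttpPost("/j_security_check");
        login.setEntity(new UrlEncodedFormEntity(form, "UTF-8"));
        HttpResponse loginRsp = httpclient.execute(host, login);
        if (loginRsp.getEntity() != null) {
            loginRsp.getEntity().consumeContent();  // release the connection
        }

        // 3. the JSESSIONID cookie is now in the client's cookie store,
        //    so this request reaches the real page
        String page = httpclient.execute(host, new HttpGet("/myPage"),
                new BasicResponseHandler());

    Step 1 matters with container-managed form auth: the server only accepts j_security_check for a session that was first redirected to the login page.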

  • AppFabric caching's local cache isn't working for us... What are we doing wrong?

    - by Olly
    We are using AppFabric as the second-level cache for an NHibernate ASP.NET application comprising a customer-facing website and an admin website. They are both connected to the same cache, so when admin updates something, the customer-facing site is updated. It seems to be working OK: we have a cache cluster on a separate server and all is well. But we want to enable the local cache to get better performance; however, it doesn't seem to be working. We have enabled it like this:

        bool UseLocalCache = ...;
        int LocalCacheObjectCount = int.MaxValue;
        TimeSpan LocalCacheDefaultTimeout = TimeSpan.FromMinutes(3);
        DataCacheLocalCacheInvalidationPolicy LocalCacheInvalidationPolicy =
            DataCacheLocalCacheInvalidationPolicy.TimeoutBased;

        if (UseLocalCache)
        {
            configuration.LocalCacheProperties = new DataCacheLocalCacheProperties(
                LocalCacheObjectCount,
                LocalCacheDefaultTimeout,
                LocalCacheInvalidationPolicy
            );
            // configuration.NotificationProperties =
            //     new DataCacheNotificationProperties(500, TimeSpan.FromSeconds(300));
        }

    Initially we tried using a timeout invalidation policy (3 minutes) and our app felt like it was running faster. HOWEVER, we noticed that if we changed something in the admin site, it was immediately updated in the live site. As we are using timeouts, not notifications, this demonstrates that the local cache isn't being queried (or is, but always misses). cache.GetType().Name returns "LocalCache", so the factory has made a local cache. Running "Get-Cache-Statistics MyCache" in PowerShell on my dev environment (ASP.NET app running locally from VS2008, cache cluster running on a separate W2K8 machine) shows a handful of request counts. However, in the production environment, the request count increases dramatically. We tried following the method here to see the cache client-server traffic: http://blogs.msdn.com/b/appfabriccat/archive/2010/09/20/appfabric-cache-peeking-into-client-amp-server-wcf-communication.aspx but the log file had nothing in it but the initial header, i.e. no logging either. I can't find anything on SO or Google. Have we done something wrong? Have we got a screwy install of AppFabric? We installed it via Web Platform Installer, I think. (Note: the IIS box running ASP.NET isn't in the cluster; it is just the client.) Any insights gratefully received!

  • Using Credentials with network scanners

    - by grossmae
    I'm testing out both Tenable's Nessus scanner and eEye's Retina for scanning network devices. I am trying to supply credentials to get deeper, more accurate results; however, there seems to be no difference in the results whether I supply the credentials or not. I've read the documentation and I think I've tried all the logical settings in the credential options. I've submitted usernames and passwords for many different accounts and types of accounts (both SSH credentials and web application credentials) on the devices, as well as their respective domain names (when applicable). Is there a good test, for either (or both) scanners, to tell whether the credentials are actually being presented (if at all) and whether any of them successfully authenticate?

  • Synchronizing one or more databases with a master database - Foreign keys

    - by Ikke
    I'm using Google Gears to be able to use an application offline (I know Gears is deprecated). The problem I am facing is synchronization with the database on the server; the specific problem is the primary keys or, more exactly, the foreign keys. When sending the information to the server, I could easily ignore the primary keys and generate new ones. But then how would I know what the relations are? I had one solution in mind, but then I would need to save all the primary keys for every client. What is the best way to synchronize multiple clients with one server DB? Edit: I've been thinking about it, and I guess sequential primary keys are not the best solution, but what other possibilities are there? Time-based doesn't seem right because of the collisions which could happen. A GUID comes to mind; is that an option? It looks like generating a GUID in JavaScript is not that easy. I could do something with natural keys or composite keys. As I'm thinking about it, that looks like the best solution. Can I expect any problems with that?
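
    For the GUID route, a sketch of the widely used v4-style generator in plain JavaScript (Math.random is not cryptographically strong, but for client-generated surrogate keys the collision odds are negligible):

        function generateGuid() {
          return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function (c) {
            var r = Math.random() * 16 | 0;            // random hex nibble
            var v = (c === 'x') ? r : (r & 0x3 | 0x8); // 'y' gets the 10xx variant bits
            return v.toString(16);
          });
        }

        var key = generateGuid(); // e.g. "3b4f0a2e-9c1d-4e8a-b7f2-0d5c6a1e9f3b"

    With client-generated GUIDs as primary keys, rows and their foreign-key references can be created offline and uploaded as-is; the server never has to renumber anything, which removes the whole key-mapping problem.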

  • VirtualBox and nginx server_name

    - by Ivan
    I'm trying to configure GitLab running in an Ubuntu 12.04 guest with a Windows 7 host. I can SSH into the guest using port forwarding, and I can reach the nginx server using port redirection (8888 on the host is 80 in the guest, so localhost:8888 on the host gets to the nginx server in the guest), but the server_name in the nginx configuration file is giving me trouble. What are the correct listen and server_name values that nginx would accept? The guest has the NAT interface at 10.0.2.15 and a host-only interface at 192.168.56.101, static. Thanks!
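
    A sketch of a vhost that should answer on both paths (nginx compares server_name against the Host header with the port stripped, so requests to localhost:8888 arrive as "localhost"); making it the default server also catches any name mismatch:

        server {
            listen 80 default_server;
            server_name localhost 192.168.56.101;

            # ... the rest of the gitlab site configuration ...
        }

    With default_server set, the exact server_name value stops mattering for a single-site guest, since unmatched Host headers fall through to this block anyway.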

  • Converting an ancient RH8 system to VMware ESXi

    - by donatello
    I am curious to know what options I have to convert a very old RedHat 8 machine to a virtual one on ESXi. Looking at VMware Converter, it seems there's an option to log in to the RH8 box using SSH, and from there it will convert to the ESXi server. That makes me a bit nervous, though: what exactly is happening there? The RH8 machine is slightly critical, and if anything messes up it'll likely mean many hours of extra work. :( Another option I thought of was to boot a LiveCD on the RH8 system and create a raw "dd dump" of the disk. A similar method would be used to restore the image: boot a LiveCD on the VM in ESXi and use "dd" to write it to disk. Is there any other option I could use? I'm using the cheap version of ESXi, hence I have no access to the Converter BootCD, so these rather cumbersome methods are the only ones I can think of. :)
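
    A sketch of the LiveCD/dd route, assuming a staging host with enough space (device names and hostnames are examples; the target virtual disk must be at least as large as the source disk):

        # on the RH8 box, booted from the LiveCD (the OS disk is not mounted):
        dd if=/dev/sda bs=64k | gzip -c | ssh user@staging 'cat > rh8-disk.img.gz'

        # on the empty VM, booted from the same LiveCD:
        ssh user@staging 'cat rh8-disk.img.gz' | gunzip -c | dd of=/dev/sda bs=64k

    Because the copy is taken offline, the source machine is never modified, which addresses the "slightly critical" worry. The main caveat is hardware drivers: the VM presents different disk and NIC hardware than the physical box, so /etc/modules.conf may need editing before RH8 boots cleanly.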

  • How to properly load HTML data from third party website using MVC+AJAX?

    - by Dmitry
    I'm building an ASP.NET MVC2 website that lets users store and analyze data about goods found on various online trade sites. When a user fills in a form to create or edit an item, there should be a button "Import data" that automatically fills some fields based on data from a third-party website. The question is: what should this button do under the hood? I see at least two possible solutions. First: do the import on the client side using AJAX and jQuery's load method. I tried it in IE8 and got a browser warning popup about insecure script actions; of course, that is completely unacceptable. Second: add a method ImportData(string URL) to the ItemController class. It is called via AJAX, does the import and data processing server-side, and returns the result to the client as JSON. I tried it and got a server exception, (503) Server unavailable, when loading the HTML data into an XMLDocument. I also have a feeling that dealing with HTML that is not well-formed (missing closing tags, etc.) will be a huge pain. Any ideas how to parse such HTML documents?
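
    A sketch of the server-side route using the HtmlAgilityPack library, which parses real-world (malformed) HTML instead of requiring well-formed XML; the URL handling, XPath and field names are illustrative assumptions:

        using System.Net;
        using System.Web.Mvc;
        using HtmlAgilityPack;

        public class ItemController : Controller
        {
            public ActionResult ImportData(string url)
            {
                string html;
                using (var wc = new WebClient())
                {
                    wc.Encoding = System.Text.Encoding.UTF8;
                    html = wc.DownloadString(url);   // fetch the third-party page
                }

                var doc = new HtmlDocument();
                doc.LoadHtml(html);                  // tolerant of missing close tags

                var titleNode = doc.DocumentNode
                    .SelectSingleNode("//h1[@class='product-title']"); // assumed XPath

                return Json(
                    new { title = titleNode == null ? null : titleNode.InnerText.Trim() },
                    JsonRequestBehavior.AllowGet);
            }
        }

    Parsing server-side also avoids the browser's cross-domain restrictions that caused the IE8 warning; the jQuery on the form then just calls this action and copies the JSON fields into the inputs.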

  • How to deny access to disabled AD accounts via kerberos in pam_krb5?

    - by Phil
    I have a working AD/Linux/LDAP/KRB5 directory and authentication setup, with one small problem: when an account is disabled, SSH public-key authentication still allows the user to log in. It's clear that Kerberos clients can identify a disabled account, as kinit and kpasswd return "Clients credentials have been revoked" with no further password interaction. Can PAM be configured (with "UsePAM yes" in sshd_config) to disallow logins for disabled accounts where authentication is done by public key? This doesn't seem to work:

        account [default=bad success=ok user_unknown=ignore] pam_krb5.so

    Please don't introduce winbind in your answer - we don't use it.
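
    A hedged alternative, since with public-key auth there are no Kerberos credentials for pam_krb5's account stage to verify against the KDC: do the disabled-account check over LDAP instead. AD encodes "disabled" as bit 0x2 of userAccountControl, which pam_ldap can test with a matching-rule filter (option names are pam_ldap's; adjust to your tree):

        # /etc/pam.d/sshd - add an LDAP account check alongside pam_krb5
        account  required  pam_ldap.so

        # /etc/ldap.conf (pam_ldap) - only enabled accounts pass:
        pam_filter  !(userAccountControl:1.2.840.113556.1.4.803:=2)

    The OID 1.2.840.113556.1.4.803 is AD's bitwise-AND matching rule, so the filter reads "the disabled bit is not set"; a disabled user then fails the PAM account stage even though sshd accepted the key.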

  • Tunneling HTTPS traffic via a PuTTY/SSH tunnel with SOCKS

    - by ripper234
    I have configured a SOCKS SSH tunnel to a remote proxy, and set my Firefox to use localhost:<port> as a SOCKS proxy. My intention is to tunnel outgoing HTTP/S connections from my machine via a specific third-party server I own (on AWS). In my testing, HTTP URLs are forwarded properly (e.g. when I access http://jsonip.com/ from my computer I do get the server's IP). However, whenever I try to reach an HTTPS address, I get this error: "The proxy server is refusing connections". How do I debug/fix it? My PuTTY tunnel config is simply a random source port number with "Dynamic" checked. P.S. I'm aware I might need to manually accept SSL certificates. The reason I'm doing this is to resolve problems using Gmail as an outbound SMTP service.
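
    A hedged checklist rather than a definite fix, since "refusing connections" for https:// only usually means HTTPS is not actually going through the SOCKS proxy:

        # 1. In Firefox's connection settings, make sure the proxy is entered in
        #    the "SOCKS Host" field (v5 selected) and that the "SSL Proxy" field
        #    is EMPTY - an entry there overrides SOCKS for https:// URLs.
        # 2. In about:config, route DNS lookups through the tunnel as well:
        network.proxy.socks_remote_dns = true
        # 3. Check that "No Proxy for" doesn't accidentally match the test hosts.

    If the checklist passes, testing with "curl --socks5-hostname localhost:<port> https://..." from the same machine separates a Firefox configuration problem from a tunnel problem.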

  • Determine process using a port, without sudo

    - by pat
    I'd like to find out which process (in particular, the process id) is using a given port. The one catch is, I don't want to use sudo, nor am I logged in as root. The processes I want this to work for are run by the same user that I want to find the process id - so I would have thought this was simple. Both lsof and netstat won't tell me the process id unless I run them using sudo - they will tell me that the port is being used though. As some extra context - I have various apps all connecting via SSH to a server I manage, and creating reverse port forwards. Once those are set up, my server does some processing using the forwarded port, and then the connection can be killed. If I can map specific ports (each app has their own) to processes, this is a simple script. Any suggestions? This is on an Ubuntu box, by the way - but I'm guessing any solution will be standard across most Linux distros.
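
    Two hedged approaches that work for processes you own (no root needed); the port number is an example:

        # ss (iproute package) shows the pid for your own sockets:
        ss -ltnp | grep :8022

        # pure-/proc fallback: socket inode for the port, then scan your fds
        port_hex=$(printf '%04X' 8022)
        inode=$(awk -v p="$port_hex" '$2 ~ ":"p"$" {print $10}' /proc/net/tcp | head -n1)
        for pid in $(ls /proc | grep '^[0-9]\+$'); do
            if ls -l /proc/$pid/fd 2>/dev/null | grep -q "socket:\[$inode\]"; then
                echo "port 8022 -> pid $pid"
            fi
        done

    The /proc scan only sees file descriptors of processes running as your user, which is exactly the case described; for the reverse-forward cleanup script, mapping each app's fixed port to a pid this way and then killing it should be enough.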

  • Download dynamic file with GWT

    - by Maksim
    I have a GWT page where the user enters data (start date, end date, etc.), and this data goes to the server via an RPC call. On the server I want to generate an Excel report with POI and let the user save that file on their local machine. This is my test code to stream the file back to the client, but for some reason I think it does not know how to stream a file to the client when I'm using RPC:

        public class ReportsServiceImpl extends RemoteServiceServlet implements ReportsService {

            public String myMethod(String s) {
                File f = new File("/excelTestFile.xls");
                String filename = f.getName();
                int length = 0;
                try {
                    HttpServletResponse resp = getThreadLocalResponse();
                    ServletOutputStream op = resp.getOutputStream();
                    ServletContext context = getServletConfig().getServletContext();
                    resp.setContentType("application/octet-stream");
                    resp.setContentLength((int) f.length());
                    resp.setHeader("Content-Disposition",
                            "attachment; filename*=\"utf-8''" + filename + "");
                    byte[] bbuf = new byte[1024];
                    DataInputStream in = new DataInputStream(new FileInputStream(f));
                    while ((in != null) && ((length = in.read(bbuf)) != -1)) {
                        op.write(bbuf, 0, length);
                    }
                    in.close();
                    op.flush();
                    op.close();
                } catch (Exception ex) {
                    ex.printStackTrace();
                }
                return "Server says: " + filename;
            }
        }

    I've read somewhere on the internet that you can't stream a file with RPC and that I have to use a servlet for that. Is there any example of how to use a servlet, and how to call that servlet from ReportsServiceImpl? Do I really need to make a servlet, or is it possible to stream it back with my RPC?
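
    A hedged sketch of the servlet route: an RPC response is a serialized Java object, so the browser never treats it as a download. The usual pattern is a separate download servlet that the client opens directly, e.g. Window.open("reports/download?start=...&end=..."), rather than calling it from ReportsServiceImpl; the class name, URL mapping and parameters below are examples:

        import java.io.File;
        import java.io.FileInputStream;
        import java.io.IOException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class ReportDownloadServlet extends HttpServlet {

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                // regenerate or look up the report for the given parameters
                File f = new File("/excelTestFile.xls");

                resp.setContentType("application/vnd.ms-excel");
                resp.setContentLength((int) f.length());
                resp.setHeader("Content-Disposition",
                        "attachment; filename=\"" + f.getName() + "\"");

                FileInputStream in = new FileInputStream(f);
                try {
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        resp.getOutputStream().write(buf, 0, n);  // stream to browser
                    }
                } finally {
                    in.close();
                }
            }
        }

    Map it in web.xml next to the GWT RPC servlets and the browser's own save dialog handles the rest; the RPC call is then only needed if the report must be prepared before handing back a URL to open.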

  • Have Ubuntu 9.10 desktop, just got Macbook Pro. Share over Samba, NFS, other?

    - by miamisoftware
    Hi everyone. As the title says, I have and love my Ubuntu 9.10 desktop (I use it for programming). I just got a MacBook Pro (Snow Leopard), and for stuff like Documents, etc., I'm trying to figure out the easiest way to share my Ubuntu desktop with the MacBook Pro. Should I use Samba or NFS, and is it easy to configure one (or something else) for in-network access only (192.168.1.x)? It took me about two days to find and set up MacFUSE and Macfusion for sshfs to the Fedora web server, and I'm hoping there's something much easier for this in-network access. But if it's required or suggested that I go the SSH route, I can do that. Are there any security problems with either Samba or NFS? I don't know much about AFP (the Apple protocol), so I haven't brought it up. Thanks in advance.
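
    A sketch of a LAN-only Samba share on the Ubuntu side, which the Mac's Finder speaks natively (share name, path and user are examples):

        # /etc/samba/smb.conf
        [global]
            # answer only the local subnet and loopback
            hosts allow = 192.168.1. 127.
            hosts deny  = ALL

        [documents]
            path = /home/you/Documents
            read only = no
            valid users = you

        # create the Samba password and reload:
        #   sudo smbpasswd -a you
        #   sudo /etc/init.d/samba restart

    On the Mac: Finder > Go > Connect to Server > smb://192.168.1.x/documents. The hosts allow/deny pair is the simplest way to keep the share invisible outside the 192.168.1.x network; NFS would work too, but classic NFS trusts client-side UIDs, so on a mixed home LAN Samba's password model is usually the safer default.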
