Search Results

Search found 52798 results on 2112 pages for 'sharepoint web services'.


  • How to check whether a user is logged in to a web application?

    - by Morgan Cheng
    I want to learn the full details of web application authentication, so I decided to write a CodeIgniter authentication library from scratch. Now I have to make a design decision about how to determine whether a user is logged in. Basically, the user enters a username and password pair once, a cookie is set for the session, and subsequent navigation in the web application does not require the username and password again. The server side checks whether the session cookie is valid to determine whether the current user is logged in. The question is: how does the server determine that a cookie is a valid cookie it issued?

    The simplest way I can imagine is to store the cookie value in server-side session state as well and, for each HTTP request, compare the value from the cookie with the value held on the server. (Since the CodeIgniter session library stores session variables in cookies, this is not applicable without some tweaking.) This method requires server-side storage. For a huge web application deployed across multiple datacenters, a user might enter the username and password while being served by one datacenter and later access the application through another. The expected behavior is that the user enters the username and password only once, so every datacenter would need access to the shared session state, which may not be practical even if the session state is kept in external storage such as a database.

    I tried this with Google: I logged in through an Asian proxy, which should direct me to datacenters in Asia, then switched to a North American proxy, which should direct me to datacenters in North America. It recognized my login without asking for the username and password again. So, is there any way to determine whether a user is logged in without server-side session state?
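
    One widely used stateless approach is to sign the session cookie so that any server holding the shared secret can verify it without consulting a session store. The sketch below is a minimal, language-agnostic illustration written in Java (not CodeIgniter code); the cookie layout (user id, expiry, HMAC) and the class and method names are assumptions made for illustration only.

        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;
        import java.nio.charset.StandardCharsets;
        import java.util.Base64;

        public class SignedCookie {
            // Shared secret known to every datacenter; manage and rotate it out of band.
            private static final byte[] SECRET = "change-me".getBytes(StandardCharsets.UTF_8);

            // Build a cookie value of the form "userId|expiresAt|signature".
            static String issue(String userId, long expiresAtMillis) throws Exception {
                String payload = userId + "|" + expiresAtMillis;
                return payload + "|" + hmac(payload);
            }

            // Return the user id if the signature and expiry check out, otherwise null.
            static String verify(String cookie) throws Exception {
                String[] parts = cookie.split("\\|");
                if (parts.length != 3) return null;
                String payload = parts[0] + "|" + parts[1];
                boolean fresh = Long.parseLong(parts[1]) > System.currentTimeMillis();
                return fresh && hmac(payload).equals(parts[2]) ? parts[0] : null;
            }

            private static String hmac(String payload) throws Exception {
                Mac mac = Mac.getInstance("HmacSHA256");
                mac.init(new SecretKeySpec(SECRET, "HmacSHA256"));
                return Base64.getUrlEncoder().encodeToString(
                        mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
            }
        }

    This is essentially how frameworks that offer signed or encrypted cookie sessions avoid shared server-side state; in production the signature comparison should be constant-time, and the payload should carry nothing sensitive, since it is only signed, not hidden.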

    Read the article

  • Battery drains even with the app off screen; could Location Services be doing it?

    - by John Jorsett
    I run my app, which uses GPS and Bluetooth, then hit the back button so it goes off screen. I verified via LogCat that the app's onDestroy was called; onDestroy removes the location listeners and shuts down my app's Bluetooth service. I look at the phone 8 hours later and half the battery charge has been consumed, and my app was responsible according to the phone's Battery Use screen. If I use the phone's Settings menu to Force Stop the app, this doesn't occur. So my question is: do I need to do something more than remove the listeners to stop Location Services from consuming power? That's the only thing I can think of that would be draining the battery to that degree when the app is supposedly dormant. Here's my onStart() where I turn on the location-related stuff and Bluetooth:

        @Override
        public void onStart() {
            super.onStart();
            if (D_GEN) Log.d(TAG, "MainActivity onStart, adding location listeners");
            // If BT is not on, request that it be enabled.
            // setupBluetooth() will then be called during onActivityResult
            if (!mBluetoothAdapter.isEnabled()) {
                Intent enableIntent = new Intent(BluetoothAdapter.ACTION_REQUEST_ENABLE);
                startActivityForResult(enableIntent, REQUEST_ENABLE_BT);
            // Otherwise, setup the Bluetooth session
            } else {
                if (mBluetoothService == null) setupBluetooth();
            }
            // Define listeners that respond to location updates
            mLocationManager = (LocationManager) this.getSystemService(Context.LOCATION_SERVICE);
            mLocationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, GPS_UPDATE_INTERVAL, 0, this);
            mLocationManager.addGpsStatusListener(this);
            mLocationManager.addNmeaListener(this);
        }

    And here's my onDestroy() where I remove them:

        public void onDestroy() {
            super.onDestroy();
            if (D_GEN) Log.d(TAG, "MainActivity onDestroy, removing update listeners");
            // Remove the location updates
            if (mLocationManager != null) {
                mLocationManager.removeUpdates(this);
                mLocationManager.removeGpsStatusListener(this);
                mLocationManager.removeNmeaListener(this);
            }
            if (D_GEN) Log.d(TAG, "MainActivity onDestroy, finished removing update listeners");
            if (D_GEN) Log.d(TAG, "MainActivity onDestroy, stopping Bluetooth");
            stopBluetooth();
            if (D_GEN) Log.d(TAG, "MainActivity onDestroy finished");
        }
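
    One way to rule Location Services out is to mirror the registration done in onStart() with removal in onStop(), so the listeners never outlive the visible lifecycle even on code paths where onDestroy() is delayed or skipped. A minimal sketch, reusing the field names from the code above:

        @Override
        protected void onStop() {
            super.onStop();
            // Mirror onStart(): drop every GPS-related callback as soon as the
            // activity is no longer visible, instead of waiting for onDestroy().
            if (mLocationManager != null) {
                mLocationManager.removeUpdates(this);
                mLocationManager.removeGpsStatusListener(this);
                mLocationManager.removeNmeaListener(this);
            }
        }

    If the drain persists with that change, the next suspect is the Bluetooth service started by setupBluetooth(), since a started service keeps running after the activity goes away unless it is explicitly stopped.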

    Read the article

  • Web service failing when installed in a separate project from the website...

    - by Adam
    I am totally new to web services and cannot get mine to work. My setup is on VS 2008 using IIS. I have one solution file with 3 projects in it: website, code, and services. If I put my web service into my website and call it locally then it works fine (it's just a hello-world web service). I want to put the service in a different location for use from multiple sites. I don't know what I'm doing wrong; I've read so much conflicting info regarding disco files, access files, Silverlight, Flash, Java, etc. I'm just looking for quick, simple steps to create a web service that I can access from JavaScript and deploy to a separate website. The end goal is to move functionality into web services so that the website calls them via JS and the pages load faster and work asynchronously. Do I need to create a disco file? Do I need to configure security? (I know this is probably best; I'm just looking to get it working at all.) Do I need to allow cross-domain access on IIS or on my hosted server? Are there any quick reference websites that you can recommend? Should I be using WCF as the newer technology? I saw it on MSDN but it seems to be more for Windows apps than web apps. I'm not getting any specific error codes. I have installed the Firefox debugging tools (Firebug) and I can see what the headers are, but I don't know how to interpret them and there is no response being passed back. Any help is appreciated!

    Read the article

  • Can any web-based email program let the user paste an image (not a link) as part of the email body instead of an attachment?

    - by Jian Lin
    I think back in the old days, some email programs let users paste an image right inside the email content (maybe WinMail or Outlook?). So, for example, we could write some text, embed an image, then write some more text and embed another image. The recipient would get the email content in that text-image-text-image order (instead of having all the images attached at the end of the email). Can any web-based email program do that? Note that the image is not just a link, but the complete file. For example, Gmail lets us paste an image, but it is actually just a link to some web location. Can the actual file content be embedded instead?
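
    For reference, the mechanism mail clients use for true inline images is a multipart/related MIME message in which the HTML part references image parts by Content-ID, so the image bytes travel inside the message rather than as a link. A rough sketch of that structure (headers trimmed; the boundary, Content-ID and filename are made up for illustration):

        Content-Type: multipart/related; boundary="BOUNDARY"

        --BOUNDARY
        Content-Type: text/html; charset=utf-8

        <p>Some text</p>
        <img src="cid:photo1@example.org">
        <p>Some more text</p>

        --BOUNDARY
        Content-Type: image/jpeg
        Content-Transfer-Encoding: base64
        Content-ID: <photo1@example.org>
        Content-Disposition: inline; filename="photo1.jpg"

        /9j/4AAQSkZJRg... (base64 image data)
        --BOUNDARY--

    Any webmail client that supports genuine inline pasting has to build a message shaped like this; one that only inserts a hosted URL is doing something different, which is exactly the distinction the question is about.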

    Read the article

  • Windows CE 6.0 loses Windows credentials when viewing a web site that's running on Windows 2008 server

    - by gnomixa
    When a user views a web page (with integrated Windows authentication) on a Windows CE 6.0 device, the authentication is lost sporadically. The page being viewed is running on a Windows 2008 server. We never had this issue with Windows 2003 server: the credentials were asked for once and cached for a certain time. My question is: has anything changed in Windows 2008 that stops the credentials being passed the same way to Windows CE? The only variable in this scenario is the web server OS, Windows 2003 vs. Windows 2008. Any help would be appreciated, thanks!

    Read the article

  • Multiple webapps in Tomcat -- what is the optimal architecture?

    - by rvdb
    I am maintaining a growing base of mainly Cocoon-2.1-based web applications [http://cocoon.apache.org/2.1/], deployed in a Tomcat servlet container [http://tomcat.apache.org/] and proxied with an Apache HTTP server [http://httpd.apache.org/docs/2.2/]. I am conceptually struggling with the best way to deploy multiple web applications in Tomcat. Since I'm not a Java programmer and we don't have any sysadmin staff, I have to figure out myself what is the most sensible way to do this. My setup has evolved through two scenarios and I'm considering a third for maximal separation of the distinct webapps.

    [1] 1 Tomcat instance, 1 Cocoon instance, multiple webapps

        - tomcat
          |_ webapps
             |_ webapp1
             |_ webapp2
             |_ webapp[n]
             |_ WEB-INF (with Cocoon libs)

    This was my first approach: just drop all web applications inside a single Cocoon webapps folder inside a single Tomcat container. This seemed to run fine, and I did not encounter any memory issues. However, it poses a maintainability drawback, as some Cocoon components are subject to updates, which often affect the webapp coding. Hence, updating Cocoon becomes unwieldy: since all webapps share the same pool of Cocoon components, updating one of them would require the code in all web applications to be updated simultaneously. In order to isolate the web applications, I moved to the second scenario.

    [2] 1 Tomcat instance, each webapp in its dedicated Cocoon environment

        - tomcat
          |_ webapps
             |_ webapp1
             |  |_ WEB-INF (with Cocoon libs)
             |_ webapp2
             |  |_ WEB-INF (with Cocoon libs)
             |_ webapp[n]
                |_ WEB-INF (with Cocoon libs)

    This approach separates all webapps into their own Cocoon environment, run inside a single Tomcat container. In theory, this works fine: all webapps can be updated independently. However, it soon results in PermGen space errors. It seemed that I could manage the problem by increasing the memory allocation for Tomcat, but I realise this isn't a structural solution, and that overloading a single Tomcat in this way is prone to future memory errors. This set me thinking about the third scenario.

    [3] multiple Tomcat instances, each with a single webapp in its dedicated Cocoon environment

        - tomcat
          |_ webapps
             |_ webapp1
                |_ WEB-INF (with Cocoon libs)
        - tomcat
          |_ webapps
             |_ webapp2
                |_ WEB-INF (with Cocoon libs)
        - tomcat
          |_ webapps
             |_ webapp[n]
                |_ WEB-INF (with Cocoon libs)

    I haven't tried this approach, but am thinking of the $CATALINA_BASE variable: a single Tomcat distribution can be instantiated multiple times with different $CATALINA_BASE environments, each pointing to a Cocoon instance with its own webapp. I wonder whether such an approach could avoid the structural memory-related problems of approach [2], or will the same issues apply? On the other hand, this approach would complicate management of the Apache HTTP frontend, as it requires the AJP connectors of the different Tomcat instances to listen on different ports. Hence, Apache's worker configuration has to be updated and reloaded whenever a new webapp (in its own Tomcat instance) is added, and there seems to be no way to reload worker.properties without restarting the entire Apache HTTP server. Is there perhaps another, more dynamic way of 'modularizing' multiple Tomcat-served webapps, or can one of these scenarios be refined? Any thoughts, suggestions, advice much appreciated. Ron
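
    For scenario [3], a single Tomcat installation (CATALINA_HOME) can indeed back several runtime instances that differ only in CATALINA_BASE. A rough sketch of what one extra instance might look like; the paths and port numbers are illustrative assumptions, not prescriptions:

        # One shared binary install
        export CATALINA_HOME=/opt/tomcat

        # Per-instance layout (repeat per webapp)
        mkdir -p /srv/tomcat-webapp1/{conf,logs,temp,webapps,work}
        cp $CATALINA_HOME/conf/* /srv/tomcat-webapp1/conf/

        # In /srv/tomcat-webapp1/conf/server.xml give this instance its own ports,
        # e.g. shutdown 8005 -> 8105, HTTP 8080 -> 8180, AJP 8009 -> 8109.

        # Start the instance
        CATALINA_BASE=/srv/tomcat-webapp1 $CATALINA_HOME/bin/startup.sh

    Each instance then needs its own AJP worker on the Apache side. Using mod_proxy_ajp (for example ProxyPass /webapp1 ajp://localhost:8109/webapp1) instead of mod_jk avoids worker.properties entirely, so adding an instance becomes an Apache configuration edit plus a graceful reload rather than a full restart.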

    Read the article

  • Zabbix Server with multiple NICs (one on a different VLAN) - monitor a host from both NICs?

    - by Joshua Enfield
    Basically, we have many servers configured for internal use only. I want to ensure the internal services really are internal by checking a host in two ways: from the local subnet (allowed; this confirms the services are up and working), and from a different subnet (VLAN), where the services should appear "down" (confirming they are not reachable from outside their own VLAN). Is there an easy way to do this in Zabbix?
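
    One low-effort approach, assuming the monitored host has an address on each VLAN and the Zabbix server routes each destination address out of the matching NIC, is to point two simple checks at the two addresses and expect opposite results. A sketch using Zabbix simple-check item keys and classic trigger syntax (the addresses and host name are placeholders):

        # Item 1: the service must be reachable over the internal subnet
        #   key:     net.tcp.service[http,10.10.1.20,80]
        #   trigger: {myhost:net.tcp.service[http,10.10.1.20,80].last(0)}=0   -> internal service is DOWN

        # Item 2: the service must NOT be reachable from the other VLAN
        #   key:     net.tcp.service[http,10.20.1.20,80]
        #   trigger: {myhost:net.tcp.service[http,10.20.1.20,80].last(0)}=1   -> service is exposed outside its VLAN

    net.tcp.service returns 1 when the port answers and 0 when it doesn't, so the first trigger alerts on an outage and the second alerts on a leak.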

    Read the article

  • What are the benefits of full VDI over Remote Desktop Services?

    - by Doug Chase
    We're talking about piloting VDI here, but the more research I do, the more it seems like it would make more sense just to upgrade and expand our TS (RDS) environment. I feel like you can pull off more sessions per core on RDS than on any VDI solution I've looked at. Is this the case? Is there a decision matrix anywhere describing the benefits of using full virtualized desktops over using a remote desktop farm? We need good video performance for clinical imaging - will this work better on one infrastructure or the other? (Does this question have a specific enough answer for it to be on SF? Regardless, I feel like having this here will be helpful for someone in the future...)

    Read the article

  • Web browser being selective about the sites that it will visit.

    - by Andrew Doran
    I've been trying to help my father-in-law with this problem but haven't been able to get anywhere. Since the weekend the web browsers on his computer (Chrome and Internet Explorer on Windows XP) will only let him get to certain sites - for example, he is able to conduct his online banking but he cannot visit www.bbc.co.uk, www.amazon.co.uk or www.ancestry.com. There is another computer in the house that goes via the same router and it can reach all of these sites, which suggests the problem is his machine. I tried running a tracert to www.bbc.co.uk and it got through, but the web browser hangs with a message that it is waiting for a response. I tried using the WinSockFix tool in case it was anything to do with a recent registry change, but that didn't work either. He can't think of anything he recently did on his machine to cause the problem. Can anyone help?

    Read the article

  • Which browser does my computer use to open a Web page?

    - by msh210
    I know little about networking and the Internet, but, from what I understand, it works — very approximately — as follows: I, sitting at the computer example.com, send a message saying, roughly, "get http://s.tk" to my ISP, which passes the message along, eventually to the machine at s.tk. The s.tk machine sees that example.com has sent "get http://s.tk", so it sends some file to its ISP, which passes the file along, eventually back to the machine at example.com. When the file gets back to example.com, my computer, how does my computer know what to do with it? I'm sure the headers (or something else) indicate it's a Web page rather than, say, a Usenet post — that's not my question. My question is: how does it know whether to display the Web page in my open Opera window or my open Firefox window, or my other open Firefox window, or, heck, to open a new browser instance?
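
    What actually disambiguates the reply is the TCP connection itself: each browser (in fact, each request) opens its own socket, which the operating system identifies by the four-tuple of local IP, local port, remote IP and remote port, so the returning bytes can only be delivered to the socket that sent the request; the headers never need to name a browser. A tiny illustration of the per-connection local ports (the host name is just a placeholder):

        import java.net.Socket;

        public class WhoGetsTheReply {
            public static void main(String[] args) throws Exception {
                // Two "browsers" talking to the same web server: the OS gives each
                // connection its own ephemeral local port, so replies cannot be mixed up.
                try (Socket a = new Socket("example.com", 80);
                     Socket b = new Socket("example.com", 80)) {
                    System.out.println("connection A local port: " + a.getLocalPort());
                    System.out.println("connection B local port: " + b.getLocalPort());
                }
            }
        }

    A brand-new page (say, a link opened from another program) goes to whichever browser the operating system has registered as the default handler for http URLs; an already-open window only ever receives responses to requests it made itself.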

    Read the article

  • How much should I charge for setting up a client web server?

    - by Vincent
    I have quite the eccentric client who wants to move all his ecommerce websites (about 5-10, all PHP/MySQL) to his own web server that I'm supposed to build. He doesn't want to hear anything about VPS hosting and all the issues and expenses related to owning a server. My responsibility would be to buy all the hardware, install and configure software, etc. How much should I charge for this? I'm planning to start with two relatively moderate Dell PowerEdge C2100 servers, one for web (NGINX), one for db (MySQL).

    Read the article

  • Existing connections on Apache with mod_proxy_balancer don't fail over to the second JBoss node

    - by Jean-Rémy Revy
    I have a JBoss farm, load balanced by Apache HTTP Server with mod_proxy_balancer and mod_proxy_ajp, with the following configuration:

        <VirtualHost *:80>
            ServerName web-gui-acceptance.myorg.com
            ServerAlias web-gui-acceptance
            ProxyRequests Off
            ProxyPass /web-gui balancer://jbosscluster/web-gui stickysession=JSESSIONID nofailover=On
            ProxyPassReverse /web-gui http://srvlnx01.myorg.com:8080/web-gui
            ProxyPassReverse /web-gui http://srvlnx02.myorg.com:8080/web-gui

            <Proxy *>
                AuthType Kerberos
                [...]
            </Proxy>

            <Proxy balancer://jbosscluster>
                BalancerMember ajp://srvlnx01.myorg.com:8009 route=SRVLNX01_node1
                BalancerMember ajp://srvlnx01.myorg.com:8009 route=SRVLNX02_node1
                ProxySet lbmethod=byrequests
            </Proxy>
        </VirtualHost>

    When the first JBoss node fails (the hosting VM is down), my existing connections don't fail over to the second node: the first route is kept (in the balancer table / .shm?) and that gives me 503 errors. Can someone tell me what I missed?
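
    Two things in that configuration are worth double-checking. First, nofailover=On explicitly tells mod_proxy_balancer not to move requests carrying an existing sticky-session route to another member, so sessions pinned to a dead node get 503s by design. Second, both BalancerMember lines point at srvlnx01. A sketch of what the failover-friendly variant might look like (host names taken from the config above; whether sessions survive the switch depends on JBoss session replication):

        <Proxy balancer://jbosscluster>
            BalancerMember ajp://srvlnx01.myorg.com:8009 route=SRVLNX01_node1
            BalancerMember ajp://srvlnx02.myorg.com:8009 route=SRVLNX02_node1
            ProxySet lbmethod=byrequests
        </Proxy>

        # Allow sticky requests to be re-routed when their member is in error state
        ProxyPass /web-gui balancer://jbosscluster/web-gui stickysession=JSESSIONID nofailover=Off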

    Read the article

  • How can I let the other computers in my workgroup browse my web server via my private local domain www.mydomain.local?

    - by Motivated Student
    Background: I have a broadband connection. The fiber-optic cable running from outside into my building is connected to a converter unit, the output of the converter is connected to a router, and the router provides 8 LAN ports to which the computers are connected. In short, all the computers are interconnected in a workgroup as opposed to a domain. My computer hosts an IIS web server using a private, local domain name, www.mydomain.local. I have added "127.0.0.1 www.mydomain.local" to the hosts file on the server (C:\Windows\System32\drivers\etc) so I can browse the site from within the server itself. So far so good. How can I make the other computers in the workgroup able to browse my web server via www.mydomain.local? Note: again, I am a newbie; I have no idea what I should do next.
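
    The 127.0.0.1 entry only works on the server itself, because 127.0.0.1 is the loopback address of whichever machine reads it. One simple approach is to find the server's LAN address (ipconfig on the server) and add an entry pointing at that address to the hosts file of every other computer in the workgroup; the address below is a placeholder:

        # C:\Windows\System32\drivers\etc\hosts   (on each client PC, not on the server)
        192.168.1.10    www.mydomain.local

    If editing every client's hosts file is impractical, the usual alternative is a small DNS server on the LAN (or local DNS entries on the router, if it supports them) so all the clients can resolve www.mydomain.local centrally. Either way, the IIS site binding on the server must accept the host name www.mydomain.local.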

    Read the article

  • How can I tell SELinux to give vsftpd write access in a specific directory?

    - by Arcturus
    Hello. I've set up vsftpd on my Fedora 12 server, and I'd like to have the following configuration. Each user should have access to: his home directory (/home/USER); the web directory I created for him (/web/USER). To achieve this, I first configured vsftpd to chroot each user to his home directory. Then, I created /web/USER with the correct permissions, and used mount --bind /web/USER /home/USER/Web so that the user may have access to /web/USER through /home/USER/Web. I also turned on the SELinux boolean ftp_home_dir so that vsftpd is allowed to write in users' home directories. This works very well, except that when a user tries to upload or rename a file in /home/USER/Web, SELinux forbids it because the change must also be done to /web/USER, and SELinux doesn't give vsftpd permission to write anything to that directory. I know that I could solve the problem by turning on the SELinux boolean allow_ftpd_full_access, or ftpd_disable_trans. I also tried to use audit2allow to generate a policy, but what it does is generate a policy that gives ftpd write access to directories of type public_content_t; this is equivalent to turning on allow_ftpd_full_access, if I understood it correctly. I'd like to know if it's possible to configure SELinux to allow FTP write access to the specific directory /web/USER and its contents, instead of disabling SELinux's FTP controls entirely.
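
    One targeted option, rather than the blunt allow_ftpd_full_access boolean, is to label only the /web tree with a writable FTP-accessible type and enable the boolean that governs writes to that type. A sketch of the commands, assuming Fedora 12 policy names (on that release the boolean may be called allow_ftpd_anon_write rather than ftpd_anon_write; check with getsebool -a | grep ftpd):

        # Label /web and everything under it as writable public content
        semanage fcontext -a -t public_content_rw_t "/web(/.*)?"
        restorecon -R -v /web

        # Allow ftpd to write to public_content_rw_t directories
        setsebool -P ftpd_anon_write on

    Because the /home/USER/Web paths are bind mounts of /web/USER, they carry the /web labels, so this covers uploads and renames through either path. Whether this boolean also covers authenticated (non-anonymous) users has varied between policy versions, so it is worth re-running audit2allow after the change to confirm nothing is still being denied.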

    Read the article

  • Email continuity services (e.g. MessageLabs) - caveats, lessons learned, gotchas?

    - by molecule
    Hi all, I am in the process of reviewing some email continuity solutions such as the one offered by MessageLabs. Solutions like this are not cheap; however, I believe they reduce administrative complexity and serve as a feasible DR-type solution for email, as opposed to purchasing a new server for DR purposes. Have any of you had first-hand experience using this kind of service, and what are your opinions and/or feedback? Thanks in advance.

    Read the article

  • How to use iptables to forward outbound web traffic to a proxy?

    - by jnman
    I've been hitting my head against this for a while. The scenario is as follows: I want to forward all outbound web traffic from a browser to Tor so that it is properly anonymized. Normally, one could just set the HTTP proxy in the browser and be done with it, but this is a browser without the ability to do so - specifically, a mobile browser. So ideally, what could be done is to intercept all web/DNS requests from the browser and send them to Tor. Assume for this that Tor will be running on the device too.
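
    A minimal sketch of the usual transparent-proxy setup, assuming Tor is configured with TransPort 9040 and DNSPort 5353 in its torrc (conventional choices, not defaults you can rely on) and that the rules run on the device itself:

        # torrc (excerpt)
        #   TransPort 9040
        #   DNSPort   5353

        # Let traffic generated by the Tor process itself bypass the redirect,
        # otherwise Tor's own relay connections would loop back into it.
        # (The account name varies: often "debian-tor" or "tor".)
        iptables -t nat -A OUTPUT -m owner --uid-owner debian-tor -j RETURN

        # Send locally generated DNS lookups to Tor's DNS resolver
        iptables -t nat -A OUTPUT -p udp --dport 53 -j REDIRECT --to-ports 5353

        # Send outbound web traffic to Tor's transparent proxy port
        iptables -t nat -A OUTPUT -p tcp --dport 80  -j REDIRECT --to-ports 9040
        iptables -t nat -A OUTPUT -p tcp --dport 443 -j REDIRECT --to-ports 9040

    Whether this is feasible on a stock mobile device is another matter: the nat OUTPUT rules require root, and anything not covered by the redirected ports still leaves the device unprotected, so redirecting all TCP (rather than just 80/443) is the more thorough variant once the Tor-user exception above is in place.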

    Read the article

  • Web App Server hardware question. Which configuration?

    - by JBeckton
    I am pricing some new servers and I am not sure which configuration to get. The server will be running several web applications for our company; some of them are ASP.NET sites and some are ColdFusion. The OS will be Windows Server 2008 Web or Standard Edition. Do I need 2 processors or will a single quad-core handle it? A multi-core Xeon with Hyper-Threading or without? I am going 64-bit so I can go higher than 4 GB of RAM. I am shopping at Dell and there are so many options; I want to get the most bang for my buck without going over budget, and I also don't want the machine to be mostly under-utilized.

    Read the article

  • Weird unexpected image compression on a web server running Apache on Ubuntu?

    - by Billy Bob Thornton
    I have a weird problem on my production web server running Apache on Ubuntu: it compresses my images, thereby dramatically lowering their quality! Actually I have two virtual hosts running, each located in a different folder. Whether I display .gif images by navigating the two sites or access them directly by their URL, their size and quality are invariably degraded. I tried with three different browsers: same problem. Using them on other sites on the Web: no problem. Of course I disabled mod_deflate on the server (which should not compress images anyway), but the phenomenon remains. On my local development server, running the same configuration, everything is OK. Now I'm completely lost! For the record, my configuration: Ubuntu 10.04, Apache 2, PHP 5.

    Read the article

  • How do I securely execute commands as root via a web control panel?

    - by Chris J
    I would like to build a very simple PHP-based web control panel to add and remove users, and to add and remove sections, in nginx config files on my Linode VPS (Ubuntu 8.04 LTS). What is the most secure way of executing commands as root based on input from a web-based control panel? I am loath to run PHP as root (even behind an iptables firewall) for the obvious reasons. Suggestions welcome. It must be possible, as several commercial (and, for my needs, bloated) control panels offer similar functionality. Thanks
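
    A common pattern, sketched below with made-up script names, is to never let PHP run arbitrary commands as root: root-owned wrapper scripts each perform exactly one validated task, and the web server account gets passwordless sudo for only those scripts.

        # /etc/sudoers.d/webpanel   (edit with visudo; on older Ubuntu releases
        # without an /etc/sudoers.d include, put the line in /etc/sudoers itself)
        # www-data may run only these wrappers, nothing else, with no password
        www-data ALL=(root) NOPASSWD: /usr/local/sbin/panel-adduser, /usr/local/sbin/panel-nginx-reload

        #!/bin/sh
        # /usr/local/sbin/panel-adduser -- strict wrapper: validate, then act
        set -eu
        name="$1"
        # Accept only plain lowercase user names to rule out shell/option injection
        echo "$name" | grep -Eq '^[a-z][a-z0-9_-]{0,31}$' || exit 1
        adduser --disabled-password --gecos "" "$name"

    The PHP panel then calls something like sudo /usr/local/sbin/panel-adduser alice; even if the PHP layer is compromised, the attacker can only invoke those specific, self-validating scripts. A variant with even less exposure is to have the panel write requests to a queue that a root-owned daemon or cron job processes, so the web process never escalates privileges at all.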

    Read the article

  • How does one check whether the OS X "disabled" flag for launchd services is set?

    - by Charles Duffy
    According to the man page for launchctl (emphasis mine):    -w   Overrides the Disabled key and sets it to false. In previous versions, this option would modify the configuration file. Now the state of the Disabled key is stored elsewhere on-disk. Because the current state of the disabled flag is no longer set in the .plist file itself, checking for the Disabled key is no longer an accurate way to tell if the service will run on next boot. Where is this "elsewhere on-disk"? More to the point (and more importantly), how does one check whether this flag is set? Also, is it possible to set a service to run on next boot without forcing it to start immediately (as with launchctl load -w /Library/LaunchDaemons/my-service.plist)?
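
    A couple of places to look, depending on the OS X release; the paths and subcommands below are what I'd expect on roughly 10.6-10.9 and on 10.10+ respectively, so treat them as starting points:

        # 10.6 - 10.9: the override database is a plist on disk
        sudo plutil -p /var/db/launchd.db/com.apple.launchd/overrides.plist
        # per-user agents live in a per-uid directory, e.g. for uid 501:
        sudo plutil -p /var/db/launchd.db/com.apple.launchd.peruser.501/overrides.plist

        # 10.10 and later: launchctl can report the disabled state directly
        launchctl print-disabled system
        launchctl print-disabled user/501

    A label listed as false (or not listed at all, with no Disabled key in its own plist) should run at next boot. As for loading a job without starting it immediately, that depends on the job itself: launchctl load only starts it right away if the plist has RunAtLoad or KeepAlive set.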

    Read the article

  • How do I install and run Tomcat on port 80 as my only web server? (Rooted Ubuntu box)

    - by gav
    Hi All, tl;dr - I have a rooted Linux box that I want to run Tomcat on as my only web server (no Apache web server); how would you set this up, avoiding common security pitfalls? I've written a Grails app that I want to run on a VPS I rent. The VPS has very little memory and I am using it for the sole purpose of running this application, so I don't need the Apache web server. This is my first venture into server administration and I'm sure to fall into some well-known traps. Should I use iptables to redirect requests from port 80 to 8080? Should I run Tomcat as root or as its own user? What configuration settings would be good for a low-memory system expecting fewer than 10 concurrent users? Hopefully an easy one for you! Anyone who could link to a tutorial would be a personal hero destined for great things, no doubt. Gav
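
    A commonly recommended layout is to run Tomcat as an unprivileged user on port 8080 and let iptables answer on port 80, so Tomcat never needs root. A sketch (user name, paths and heap size are assumptions to adapt):

        # Run Tomcat under its own account, never as root
        sudo adduser --system --group --home /opt/tomcat tomcat
        sudo chown -R tomcat:tomcat /opt/tomcat

        # Redirect incoming port 80 to Tomcat's HTTP connector on 8080
        sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
        # (external clients use port 80; local tests still use http://localhost:8080)

        # Keep the heap small for a low-memory VPS, e.g. in bin/setenv.sh:
        #   CATALINA_OPTS="-Xmx128m"

    For fewer than 10 concurrent users the default connector settings are usually fine; the bigger wins are the dedicated user, removing the manager/docs/examples webapps you don't use, and keeping Tomcat patched.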

    Read the article

  • Subversion for web designer: repository on a network share and ftp to the live server?

    - by ceatus
    My configuration:

    - htdocs on a Windows network share (z:)
    - web developers check out with Dreamweaver, modify, and check back in to drive z
    - a LAMP stack running on an Ubuntu server virtualized on Hyper-V, with Apache pointing at the z drive, used for dev in order to test the websites
    - upload by FTP to the live server

    Now: I need multiple people to have access to the repository, I want to keep the repositories on a network share, and we manage about 200 websites. All the web developers, administrators and IT staff need access to the share. I found out that creating an svn server is the best way for me, so I created one on an Ubuntu Server which is virtualized on Hyper-V. Right now the repositories are local to the Ubuntu server, but I'd like them on my network drive, and I'd like to have a post-commit hook, if possible, in order to FTP directly to my live server. Do you guys think that a WebDAV solution would be better? Thanks in advance, Angelo
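
    If you stay with the svn server, the "FTP on commit" part is typically a post-commit hook. A rough sketch (repository layout, credentials and the lftp invocation are assumptions to adapt; with 200 sites you would want this to run asynchronously rather than inline):

        #!/bin/sh
        # hooks/post-commit -- Subversion calls this as: post-commit REPOS REV
        REPOS="$1"
        REV="$2"

        # Export a clean copy of the committed tree (no .svn folders)
        EXPORT_DIR=$(mktemp -d)
        svn export -q -r "$REV" "file://$REPOS/trunk" "$EXPORT_DIR/site"

        # Mirror it to the live server over FTP
        lftp -u ftpuser,ftppass ftp.example.com -e "mirror -R --delete $EXPORT_DIR/site /public_html; bye"

        rm -rf "$EXPORT_DIR"

    On the storage question: keeping FSFS repositories on a Windows share is the risky part, since SMB locking problems can corrupt them, so the usual advice is to keep repositories on the svn server's local disk and publish them over http(s) with Apache and mod_dav_svn, or over svn://. That also largely answers the WebDAV question, because mod_dav_svn is itself a WebDAV (DeltaV) implementation: clients can reach the repositories over the network that way without the repository files living on the share.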

    Read the article

  • How can I handle a .org domain on my own nameserver without paying for unwanted services?

    - by etuardu
    I have a dot-org domain that I use to run a website. Until now, I had an account with a hosting-plus-domain provider. Recently I decided to run the website on my own web server and to handle the DNS for the domain on my own nameserver. What do I need to do in order to handle my .org domain myself? Do I still need a registrar? Is there a more direct way, provided by pir.org, to simply point the domain name at a nameserver of my choice?
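
    For context on what handling it yourself involves: .org registrations always go through an ICANN-accredited registrar (PIR, the .org registry, does not sell to end users directly), but the registrar only needs to hold the NS delegation pointing at your nameservers; everything else lives in a zone you host. A minimal BIND-style zone sketch, with example names and documentation-range addresses:

        $TTL 3600
        $ORIGIN example.org.
        @       IN  SOA  ns1.example.org. hostmaster.example.org. (
                         2024010101 ; serial
                         7200       ; refresh
                         900        ; retry
                         1209600    ; expire
                         3600 )     ; minimum
                IN  NS   ns1.example.org.
                IN  NS   ns2.example.org.
        ns1     IN  A    203.0.113.10
        ns2     IN  A    203.0.113.11
        @       IN  A    203.0.113.10
        www     IN  A    203.0.113.10

    Because ns1 and ns2 live inside the domain they serve, they also need glue records entered at the registrar along with the NS delegation. A registrar-only (no hosting) account is usually the cheapest way to keep just that piece, and registrars generally expect at least two nameservers for the delegation.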

    Read the article
