Search Results

Search found 31298 results on 1252 pages for 'google eclipse plugin'.


  • How can I restrict a group to reading only two particular folders with Windows Server?

    - by Lord Torgamus
    I have a group of users on Windows Server 2003 who need to be able to read the contents of two directories but must not be able to access anything else on the server, not even read-only. One of the directories is K:\projectFour\config (the other is formatted similarly), so it would be fine for group members to list the contents of K:\ and K:\projectFour\ without being able to read any of the files in those directories. I've found several resources via SF/Google, including how to restrict individual folders/drives and how to allow users to run only specific executables, but that information ultimately didn't solve my issue. Sorry if this is a really simple thing to do; I'm usually a developer and don't know the first thing about servers or group policies. Finally, I should mention that this isn't a fully concrete question yet: it will be implemented eventually, but I don't personally have a copy of Windows Server 2003 to test with right now.

    Read the article

  • showing and hiding toolbars with shortcuts in firefox

    - by Edwinistrator
    I'm a web developer using Firefox with the bookmarks toolbar, the Web Developer toolbar and the Google toolbar. Are there shortcuts in Firefox 3.5 to hide and show these toolbars? If not, maybe an add-on which works in 3.5 could bind a small script to a shortcut to hide them? I found the keyconfig add-on, but it doesn't work in 3.5. I also found this script, which sounds great; does anyone know how to get it working? http://superuser.com/questions/77206/shortcut-key-for-bookmar-toolbar-in-firefox

        var toolbar = document.getElementById("PersonalToolbar");
        toolbar.collapsed = !toolbar.collapsed;
        document.persist(toolbar.id, "collapsed");

    Read the article

  • Is there some standard way to publish a calendar feed?

    - by CMP
    For a web application I am working on, I would like to give the user a single URL that they can enter into the calendar application of their choosing to have events from our application show up in their calendar. Most other sites I have seen that do similar things offer a one-time download of an .ics file that can be imported. If I have to require my users to download a new file every time the schedule changes, it rather defeats the purpose of having the feed at all; the calendar can change many times a day. What I would really like is something like RSS, where their calendar program can look up a URL and automatically see the most recent data. Does anything like this exist? Our main target is mobile devices, so it really should be supported by iCal and Google Calendar. Anything else is a bonus.
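
    This is exactly what iCalendar subscription feeds provide: publish the events at a stable URL with the text/calendar MIME type, and clients such as Google Calendar ("Add by URL") and iOS/iCal subscribed calendars will poll it periodically for changes. A minimal Node.js sketch of such an endpoint; the event data, port, and UID domain are placeholder assumptions, and a production feed would also need DTSTAMP fields and proper text escaping:

        // Sketch: serve a live iCalendar feed that calendar apps can subscribe to.
        const http = require('http');

        function buildIcs(events) {
          const lines = ['BEGIN:VCALENDAR', 'VERSION:2.0', 'PRODID:-//myapp//EN'];
          for (const ev of events) {
            lines.push(
              'BEGIN:VEVENT',
              `UID:${ev.id}@myapp.example`,  // stable id so updates replace old events
              `DTSTART:${ev.start}`,         // e.g. 20240101T120000Z
              `DTEND:${ev.end}`,
              `SUMMARY:${ev.title}`,
              'END:VEVENT');
          }
          lines.push('END:VCALENDAR');
          return lines.join('\r\n');         // iCalendar requires CRLF line endings
        }

        http.createServer((req, res) => {
          // Stand-in for a database query; the feed is regenerated on every poll,
          // so subscribers always see the latest schedule without re-importing.
          const events = [{ id: 1, start: '20240101T120000Z',
                            end: '20240101T130000Z', title: 'Demo event' }];
          res.writeHead(200, { 'Content-Type': 'text/calendar; charset=utf-8' });
          res.end(buildIcs(events));
        }).listen(8080);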

    Read the article

  • Server problem (duh)

    - by j-t-s
    Sorry the title couldn't be more specific. I installed Abyss Web Server. I'm running Windows XP Home Edition and I have wireless mobile broadband internet. I used to be able to access my site (as could other people on other networks) by entering my IP address in the browser, but after I formatted and then installed Abyss Web Server again, this no longer works. There are no errors. I can still visit my own site by entering my IP address, but nobody else can; their browsers just say "connecting" in the status bar and it never changes. I have consulted the docs and found no help, and Google hasn't helped with this problem either. Can somebody please help? Thank you :)

    Read the article

  • What are my choices for server side sandboxed scripting?

    - by alfa64
    I'm building a public website where users share data and scripts to run over that data. The scripts run server-side in some sort of sandbox, with no other interaction, in this cycle: my Perl program reads a user-made script from a database, adds the data to be processed into the script (i.e. a JSON document), calls the interpreter, gets back the response (a JSON document or plain text), and saves it to the database. The script should have access to some built-in functions I add to the scripting language myself, but nothing more. So I've stumbled upon Node.js as a JavaScript interpreter, and an hour or so ago onto Google's V8 (does V8 make sense for this kind of thing?). CoffeeScript also came to mind, since it looks nice and is still JavaScript. I think JavaScript is widespread enough and more "sandboxable", since it doesn't have OS calls or anything remotely insecure (I think). By the way, I'm writing the system in Perl, with PHP for the front end. To sharpen the question: I'm choosing JavaScript because I think it's secure and simple enough to implement with Node.js, but what other alternatives are there for achieving this kind of task? Lua? Python? I just can't find information on how to run a sandboxed interpreter in a proper way.
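
    For the Node.js route, the built-in vm module illustrates the basic embedding pattern: evaluate user code in a fresh context that contains only the globals you explicitly expose. Note that vm isolates scope but is not a hardened security boundary on its own; running the interpreter as an unprivileged user, with resource limits or in a container, is still advisable. A sketch, where the script and input document stand in for values read from the database:

        // Sketch: run an untrusted script against a restricted global scope.
        const vm = require('vm');

        const userScript = 'output = JSON.stringify({ n: input.items.length });';
        const inputDoc = { items: [1, 2, 3] };

        const context = vm.createContext({
          input: inputDoc,   // the data handed to the user script
          output: null,      // the script writes its result here
          log: (msg) => console.log('[user]', msg)  // example of an exposed built-in
        });

        vm.runInContext(userScript, context, { timeout: 1000 }); // kill runaway loops
        console.log(context.output); // -> {"n":3}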

    Read the article

  • Windows 7 Sub-Folders hidden in "Program Files" directory

    - by ron tornambe
    I have been Google searching for an hour now and I am confounded. I am using Inno Setup to install a .NET WinForms application that creates directories and folders on the fly. (I have set the folder options to display hidden files and folders.) Although the files added to these "created" folders appear within the application, they do not show up in Windows Explorer or even in a Dir from a command prompt. I have also modified the application to display (and delete) the contents of these (seemingly imaginary) folders, so I am sure they exist. What am I missing?

    Read the article

  • Remote Access to Owncloud Server

    - by John
    I'm currently trying to set up my own ownCloud server. I have it fully installed, configured, and accessible from within my local network, but I cannot figure out how to access it from the outside. So far I've:
    - Successfully set up port forwarding on my local router, via both "single port forwarding" and "port range forwarding", for ports 80, 443 and 3306 (Apache-Full and MySQL)
    - Successfully obtained my external IP address; I also tested this magic number from within the network at #insertIPhere/owncloud, and it did work
    - Successfully set up the server using SQLite
    - Successfully set up the server using MySQL
    - Created firewall exceptions to allow in port 80 (Apache Full), port 443 (Apache Full) and port 3306 (MySQL)
    - Tried connecting from several different remote networks, to rule out something on their end
    As far as trying to access it, I'm going through Google Chrome and Mozilla Firefox, trying to reach the server at #insertIPhere/owncloud using the above public IP address. So what have I missed, and how do I access my server from outside? Thanks in advance for your help and time, and I apologize in advance for what will probably turn out to be a noobish networking mistake. I've looked at the official documentation, and also at this question here.

    Read the article

  • Removing (Presumably) Extraneous Network Adapters from Device Manager (eg WAN Miniport)

    - by Synetech inc.
    Can anyone shed some light on the default items in the Network Adapters branch of the Windows Device Manager? In addition to the network card, there are always a bunch of other things that I cannot find any useful information on, such as the RAS Async Adapter and all the WAN Miniports (IKEv2, IP(v6), L2TP, Network Monitor, PPPOE, PPTP, SSTP). I would like to trim the list down and uninstall whatever possible, but I cannot find out exactly what these items are responsible for (and therefore whether or not they are needed on my system). Most of the pages found with Google are either people trying to fix an error with such an item, or someone asking what it is and being given an unhelpful pat response like "just leave them alone" or "they're necessary". I highly doubt that is the case, and I'm certain that at least some items can be removed, because even if they become necessary in the future they can be added again (for example, installing Network Monitor or Protowall reinstalls the miniport drivers anyway).

    Read the article

  • Purpose oriented user accounts on a single desktop?

    - by dd_dent
    Starting point: I currently do development for Dynamics AX, Android, and an occasional dabble with WordPress and Python. Soon I'll start a project involving setting up WP on Google App Engine. Everything is, and should continue to, run from the same PC (running Linux Mint). Issue: I'm afraid of botching/bogging down my setup by tinkering with and installing multiple runtimes/IDEs/SDKs/services, so I was thinking of using multiple users, each dedicated to the task at hand (web, Android, etc.), making each user as inert as possible to the others. What I need to know is the following: Is this a good/feasible practice? The next closest thing to it is using remote desktop connections, either to computers or to VMs, which I'd rather avoid. What about switching users? Can it be made seamless? Anything else I should know? Update and clarification regarding VMs and whatnot: the reason I wish to avoid resorting to VMs is that I dislike the performance impact and sluggishness associated with them, and I suspect they might add a layer of complexity I wish to avoid. This answer by Wyatt is interesting, but I think it's only partly suited to my requirements (web development, for example). Also, in reference to the point made about system-wide installs, there is a level of compromise I should accept, as expressed by this, for example. The option suggested by 9000 is also enticing (more than VMs, actually), and by no means do I intend to "juggle" JVMs and whatnot, partly for the reason mentioned before. Regarding complexity, I agree and will consider what was said; it's just that, from my experience, I tend to pollute my work environment with SDKs and runtimes I tried and discarded, which occasionally leave leftovers that cause issues throughout the session. What I really want is a set of well-defined, non-virtualized sessions from which I can choose at my leisure, each reasonably safe from affecting the others. And what I'm really asking is if, and how, this can be done using user accounts.

    Read the article

  • DNS Replication issue

    - by BillN
    We host the DNS for our domain. Two weeks ago, a developer requested that we set up a new zone, dev.ourdomain.com, and place two host records in it: my.dev.ourdomain.com and admin.dev.ourdomain.com. We added the zone to our DNS and added A records for the hosts. Now, a week later, some DNS servers like Google (8.8.8.8) and GTEI (4.2.2.2) will resolve the hosts, but others like OpenDNS (208.67.222.222) and AT&T Uverse (68.94.156.1) cannot resolve them. Any ideas?
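
    A quick way to narrow this down is to query the affected resolvers directly and compare their answers with an authoritative one; if the delegation and A records are correct, the usual culprit is a resolver still holding a cached negative answer from before the zone existed. For example (the authoritative server name below is an assumption):

        dig my.dev.ourdomain.com @208.67.222.222     # ask OpenDNS directly
        dig my.dev.ourdomain.com @8.8.8.8            # compare with Google
        dig ourdomain.com NS +short                  # list the authoritative servers
        dig my.dev.ourdomain.com @ns1.ourdomain.com  # ask your own server (name assumed)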

    Read the article

  • Can ping localhost but can't browse

    - by Anna
    I know this is a pretty common question, but I did my research and couldn't find a solution for this issue. I'm configuring a development application server and I've reached the point where I can ping both localhost and 127.0.0.1, but I cannot browse either of them from IE or Firefox. I can browse and ping other websites (such as Google) just fine. I tried flushing the DNS (ipconfig /flushdns), restarting the IIS Admin service, restarting IIS itself, etc., and nothing seems to work. The results from ipconfig /all show IP Routing Enabled = No and WINS Proxy Enabled = No. What is intriguing to me is that I compared everything in IIS in the dev environment with the production environment and the settings are the same, yet I can browse localhost in production but not in dev! What could be causing the inability to browse localhost and 127.0.0.1 from IE and Firefox?

    Read the article

  • emacs and putty on windows 7

    - by twilbrand
    My workstation was recently updated to Windows 7. I've downloaded PuTTY and configured it with the same settings I had under Vista. Whenever I SSH to a VM running CentOS 5.4 and try to run emacs on a file, I get an error about a connection to an X server:

        [ecto1 ~]$ emacs foo.bar
        Connection lost to X server `localhost:10.0'

    I never received this error message when I had Vista. I can get around it by aliasing emacs to 'emacs -nw', but I don't feel that I should have to do this. My co-worker has the same hardware and had the same upgrade, and his sessions do not seem to be doing this. Any advice? I can't find anything on Google and don't know where else to start.

        [ecto1 ~]$ emacs -version
        GNU Emacs 21.4.1

    Read the article

  • Is a 302 redirect to a random URL from the homepage an SEO problem?

    - by CookieMonster
    I originally posted this on Stack Overflow, but I believe here is a better place to ask. My web application is very similar to notepad.cc, which redirects to a randomly generated URL upon access, e.g. http://myapp.com/roTr94h4Gd. (Please note that notepad.cc is not my site.) Probably because of this redirect feature, when I do "fetch as Google" or "fetch as Bingbot", I get a 302 and no HTML content, not even an <html></html> tag:

        HTTP/1.1 302 Moved Temporarily
        Server: nginx/1.4.1
        Date: Tue, 01 Oct 2013 04:37:37 GMT
        Content-Type: text/html
        Transfer-Encoding: chunked
        Connection: keep-alive
        X-Powered-By: PHP/5.4.17-1~dotdeb.1
        Set-Cookie: PHPSESSID=vp99q5e5t5810e3bnnnvi6sfo2; expires=Thu, 03-Oct-2013 04:37:37 GMT; path=/
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Location: /roTr94h4Gd

    How should I avoid the 302 in this case? I suppose I could modify my site to prevent the redirect, but generating a random URL on each access is a necessary feature of my web app. I added a <meta name="fragment" content="!"> tag to my index page and set it to return a static snapshot of the page when the flag is set, but this still returns a 302. I also added a header to return 200 before redirecting, but that had no effect either. Could someone give me a good suggestion for solving this problem?
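
    One common approach, sketched below as a hypothetical Node.js server (the question's PHP app would do the equivalent), is to stop redirecting GET / at all: serve a real 200 landing page at the root so crawlers have something to index, and hand out the freshly generated pad URL via a link or an immediate client-side hop instead of an HTTP 302:

        // Sketch: keep the random-URL feature but give crawlers a 200 at "/".
        const http = require('http');
        const crypto = require('crypto');

        http.createServer((req, res) => {
          if (req.url === '/') {
            const pad = crypto.randomBytes(8).toString('hex'); // fresh pad id
            res.writeHead(200, { 'Content-Type': 'text/html' });
            // Indexable content, plus a client-side hop for real visitors.
            res.end(`<html><body><h1>Instant shareable notepads</h1>
              <p>Describe the app here so search engines see real content.</p>
              <a href="/${pad}">Start a new note</a>
              <script>location.href = "/${pad}";</script>
              </body></html>`);
          } else {
            res.writeHead(200, { 'Content-Type': 'text/html' });
            res.end('<html><body>(pad content would render here)</body></html>');
          }
        }).listen(8080);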

    Read the article

  • Postfix - Block email from non-existent local addresses

    - by Kelso.b
    My question is very similar to this one, but for Postfix. We keep getting emails from addresses like "[email protected]" delivered to other "@ourdomain.com" addresses. From my Google research, I understand it might not be practical to verify that the email originated from our IP or VPN (although this would be ideal, so if you can think of a way to do this, let me know), but in most of these cases the sender address (e.g. "accounting") is not a valid account. I imagine there must be a way to make sure that a local account exists before delivering the message.
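
    Postfix has a restriction aimed at exactly this: reject_unlisted_sender refuses mail whose envelope sender claims an address in a domain the server is responsible for but that does not correspond to a valid local recipient. A sketch of the relevant main.cf lines (the restriction order here is illustrative, not a complete policy):

        # main.cf -- reject senders claiming a local @ourdomain.com address that doesn't exist
        smtpd_sender_restrictions =
            permit_mynetworks,
            reject_unlisted_sender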

    Read the article

  • Attempting to cause packet loss with netem doesn't work - possibly because of NAT (but delay does work)

    - by tomdee
    I have traffic from a WiFi access point routed via an Ubuntu box, with two network interfaces that are NATed:

        *filter
        :INPUT ACCEPT [11:690]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [37:6224]
        -A FORWARD -s 192.168.2.0/24 -i eth1 -o eth0 -m conntrack --ctstate NEW -j ACCEPT
        -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
        COMMIT
        # Completed on Thu Mar 15 13:37:21 2012
        # Generated by iptables-save v1.4.10 on Thu Mar 15 13:37:21 2012
        *nat
        :PREROUTING ACCEPT [0:0]
        :INPUT ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        -A POSTROUTING -j MASQUERADE
        COMMIT

    If I run a ping app on an Android device connected to the WiFi network, I can happily ping Google. If I use netem to introduce some delay:

        tc qdisc change dev eth0 root netem delay 100ms

    I can clearly see pings taking longer. If I use netem to introduce some packet loss:

        tc qdisc change dev ifb0 root netem loss 50%

    then I see no change. Packet loss does work fine for locally generated traffic, just not for traffic coming in over the network that's being NATed. Any ideas how to sort this out?
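
    One thing worth checking, sketched below with the interface names taken from the question: an ifb device only receives traffic that has been explicitly redirected to it with a mirred filter, so a netem qdisc on ifb0 is a no-op unless that ingress redirect is in place. A typical setup looks like:

        modprobe ifb
        ip link set dev ifb0 up
        # steer everything arriving on eth1 through ifb0, where netem can drop it
        tc qdisc add dev eth1 handle ffff: ingress
        tc filter add dev eth1 parent ffff: protocol ip u32 match u32 0 0 \
            action mirred egress redirect dev ifb0
        tc qdisc add dev ifb0 root netem loss 50%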

    Read the article

  • Internet Explorer 8 only running as process not application

    - by Lord Peter
    Internet Explorer 8 on XP SP3 starts without a browser window. Task Manager doesn't show the application, but iexplore.exe is listed twice in the process window. Process Explorer reports "no visible windows found for this process" when I try "bring to front" in the iexplore.exe properties dialog. I have reinstalled (twice), run full scans with MBAM/MSSE/SpyBot etc., re-registered ieproxy.dll (another Google-inspired tip!), run without add-ons (the -extoff switch), and still have the same problem. I recently uninstalled VMware Player and wondered whether the problem is somehow related to the VM network adapter, but Firefox still works perfectly. This is one of my home machines, not critical, and it is backed up, so I will restore if I have to. But any and all suggestions will be gratefully received. It would be nice to understand what might have happened, and perhaps others may benefit from any knowledge that comes to light.

    Read the article

  • Moving from a static site to a CMS with new URLs and meta-data for pages

    - by Chris J
    Hi, I am in the process of rebuilding a site from static pages to a CMS which will use mod_rewrite to generate new page URLs. In this process, our marketing people and I have decided to tidy up the descriptions, keywords and titles. E.g.: a page whose URL is currently "website-name/about_us.html", with a title of "website-name - something not quite page specific", will change to "website-name/about-us/" with the title "about us - website-name", and may have a few keywords and the description changed. Our goal with updating the metadata is to improve our page rankings and to keep in line with best practices for SEO. Though our current page rankings are quite good in many respects, there is room for improvement. All of the pages will also have content changes (like rearranging heading tags, a new menu on all pages, new content in the footer, and extra pieces of dynamic content relating to other pages). As part of the move, I plan to use 301 redirects from all the old URLs to the new URLs. My question is: what can I expect to happen to the page rankings in Google, in the short term and the long term? Will this be like kicking off a new site which has to build up trust over time, or will the original page rankings carry over?
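
    For the redirect step itself, a permanent (301) redirect per old URL is the standard way to tell search engines a page has moved for good; with Apache this can be a one-liner per page in the server config or .htaccess (paths taken from the example above):

        # map the old static page to its new CMS URL
        Redirect permanent /about_us.html /about-us/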

    Read the article

  • Making efficient voxel engines using "chunks"

    - by Wardy
    Concept: I'm currently looking into how voxel engines work, with a view to possibly making one myself. I see a lot of material like this ... https://sites.google.com/site/letsmakeavoxelengine/home/chunks ... which talks about how to reduce draw calls. What I can't seem to understand is how chunks actually save draw calls, given logic along these lines:

        Without chunks:
            foreach voxel in myvoxels: DrawIfVisible()

        With chunks:
            foreach chunk in mychunks: DrawIfVisible()
        which then does:
            foreach voxel in chunk: DrawIfVisible()

    So surely you saved nothing?! You still make a draw call for each visible voxel in either scenario; a visible voxel needs a draw call either way. The only real saving I can see is that the logic that evaluates a chunk can determine whether a large number of voxels are visible or not, saving a bit of "is this chunk visible" CPU time. But it's the draw calls that interest me: the fewer of those, the faster the application. EDIT: In case it makes any difference, I will probably be using XNA (DX, not OpenGL) for my engine, so don't consider the choice of example in the link above my choice of technology. But this question is such that I doubt it would matter.
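
    The usual resolution of this puzzle is that a chunk is not drawn voxel by voxel: when a chunk is built (or rebuilt after an edit), all of its visible faces are baked into a single vertex buffer, so rendering the whole chunk costs one draw call no matter how many voxels it contains. A sketch of the idea; the gpu.upload/gpu.draw calls are placeholders, not a real API:

        // Sketch: one batched mesh, and therefore one draw call, per chunk.
        // Runs only when a chunk's voxels change, not every frame.
        function buildChunkMesh(chunk) {
          const vertices = [];
          for (const voxel of chunk.voxels) {
            for (const face of voxel.faces) {
              // Faces buried against a solid neighbour produce no geometry at all.
              if (!chunk.isSolidNeighbour(voxel, face)) {
                vertices.push(...face.quadVertices());
              }
            }
          }
          chunk.vertexBuffer = gpu.upload(vertices); // hypothetical GPU upload
        }

        // Per frame: cull whole chunks, then issue one draw call per visible chunk.
        function render(chunks, camera) {
          for (const chunk of chunks) {
            if (camera.canSee(chunk.bounds)) {
              gpu.draw(chunk.vertexBuffer); // one call covers thousands of voxels
            }
          }
        }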

    Read the article

  • Issue with https:// url going to an unknown location

    - by Brandon
    We have a website (ASP.NET/Plesk 9.5.5) that can be accessed just fine through the regular URL (http://example.com). However, when accessing the site through https://example.com, the browser displays the invalid-security-certificate warning, which is expected since we don't have an SSL certificate. But if I add an exception, I'm sent to a completely separate site that is apparently hosting a malware script (even though I'm still on https://example.com). Because of this, Google has flagged the site as dangerous. I can't find anything in the Plesk panel that would help fix this, and as far as I can tell those files don't exist on our server. How do I tell where the https:// link is sending me? I'm not that familiar with DNS, but is that what is causing this behavior?

    Read the article

  • There is a porn domain pointing to my site

    - by Nicolas Martel
    Let's say example.com is my real site, and fooexample.com is the porn site; fooexample.com points to my IP. Now, you might think I could just ignore it, right? Well, the thing is that it is driving loads of traffic. Not only that, but my main domain example.com becomes unavailable after a couple of minutes, and the only domain that still works is either fooexample.com or neither of the two. What I have done so far is use mod_rewrite to redirect the porn domain to Google, but my domain still becomes unavailable. Blocking the IPs produced no results either. I hope someone will be able to help me, because this is a huge problem right now. Thanks.

    Read the article

  • can canonical links be used to make 'duplicate' pages unique?

    - by merk
    We have a website that allows users to list items for sale. Think eBay, except we don't actually handle the sale; we just list the item and provide a way to contact the seller. Anyhow, in several cases sellers may have multiple units of an item for sale. We don't have a quantity field, so they upload each unit as a separate listing (and adding a quantity field is not an option). So we have a lot of pages which have basically the exact same info, where only the item number might differ. The SEO guy we've started using says we should put a canonical link on each page, and have the canonical link point to itself. So, for example, www.mysite.com/something/ would have a canonical link of href="www.mysite.com/something/". This doesn't really seem kosher to me; I thought canonical links were supposed to point to other pages. The SEO guy claims doing it this way will tell Google all these pages are indeed unique, even if they basically have the same content. That seems a little off to me, since what's to stop a spammer from putting up a million pages and doing this as well? Can anyone tell me if the SEO guy's suggestion is valid or not? If it's not valid, do I need to figure out some way to detect duplicated items, automatically pick one duplicate to serve as the original, and generate canonical links based on that? Thanks in advance for any help.
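
    For reference, a canonical link is just a tag in the page head, and a self-referencing one (pointing at the page's own URL) is legal. Consolidating duplicates, by contrast, means pointing every copy at one chosen listing. Illustrative markup using the question's example URL (the second href is hypothetical):

        <!-- self-referencing, as the SEO guy suggests -->
        <link rel="canonical" href="http://www.mysite.com/something/">

        <!-- consolidating: every duplicate listing points at one chosen original -->
        <link rel="canonical" href="http://www.mysite.com/items/chosen-original/">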

    Read the article

  • X-notifier doesn't work in Chromium Browser

    - by cipricus
    It just keeps checking in vain. I also cannot import or export data, but get this error. I use the latest versions of both in Lubuntu 12.04; in Google Chrome it works. What could the problem be? Edit (following vasa1's comment): running sudo aa-status I get:

        apparmor module is loaded.
        16 profiles are loaded.
        16 profiles are in enforce mode.
           /sbin/dhclient
           /usr/bin/evince
           /usr/bin/evince-previewer
           /usr/bin/evince-previewer//launchpad_integration
           /usr/bin/evince-previewer//sanitized_helper
           /usr/bin/evince-thumbnailer
           /usr/bin/evince-thumbnailer//sanitized_helper
           /usr/bin/evince//launchpad_integration
           /usr/bin/evince//sanitized_helper
           /usr/lib/NetworkManager/nm-dhcp-client.action
           /usr/lib/connman/scripts/dhclient-script
           /usr/lib/cups/backend/cups-pdf
           /usr/lib/lightdm/lightdm/lightdm-guest-session-wrapper
           /usr/sbin/cupsd
           /usr/sbin/ntpd
           /usr/sbin/tcpdump
        0 profiles are in complain mode.
        3 processes have profiles defined.
        3 processes are in enforce mode.
           /sbin/dhclient (1562)
           /usr/sbin/cupsd (916)
           /usr/sbin/ntpd (1695)
        0 processes are in complain mode.
        0 processes are unconfined but have a profile defined.

    Read the article

  • opengl libraries for ubuntu running on VirtualBox

    - by vboxuser
    I have Ubuntu 10.04 running on VirtualBox. Guest Additions are installed successfully, but Guest Additions do not provide the OpenGL library. What needs to be installed to run OpenGL demos? Some Google links suggest mesa-utils, but my understanding is that the Mesa utils do not use the vbox drivers to achieve hardware acceleration. Ubuntu 10.04 is running on VirtualBox, not on bare metal. The host comprises a Zx400 CPU with an Nvidia graphics driver, but this is not relevant, since I would be using the vbox drivers provided by Guest Additions on VirtualBox. In this scenario, how do I get OpenGL libraries on Ubuntu that use the vbox drivers for 3D hardware acceleration? (I am specifically looking for a solution other than Mesa, since Mesa only has support for VMware and not for VirtualBox.) Thank you.

    Read the article

  • How to extend a window on 4 virtual desktops on Windows 7

    - by Patrice
    This site is very cool, and I often get many answers on it :) But today I have a question for you. My problem is: I want to use a virtual desktop or resolution expander for Windows 7, so that a single window can span twice the screen resolution in each direction (i.e. only 25% of the window is visible on one screen). I tried simple tricks that didn't work for me, and installed a virtual desktop manager (Dexpot, a great one), but I can't manage to stretch a window (Google Chrome) over the 4 desktops simultaneously. Do you understand my problem, and do you have an answer to it? That would be a great help ;-) See ya! Patrice

    Read the article
