Search Results

Search found 21717 results on 869 pages for 'setup versions'.

Page 488/869 | < Previous Page | 484 485 486 487 488 489 490 491 492 493 494 495  | Next Page >

  • My saved drafts become unread email in Windows Live Mail and Gmail IMAP

    - by Valamas
    I have set up Windows Live Mail with my Gmail account in IMAP mode. When I draft an email and save it, it is saved to the Drafts folder automatically. Within a minute, WLM plays its new-mail sound and shows the mail icon in the system tray - but the alert is for the draft I just saved, which appears as unread. To make the notification icon in the system tray go away, I have to go and mark the draft as read. This repetition is tedious and distracting. How can I avoid this annoyance? Thanks

    Read the article

  • In Debian, how can I route rtorrent to a certain network interface, say ppp0?

    - by Timo
    I have purchased a PPTP account from StrongVPN and configured it following these instructions (http://pptpclient.sourceforge.net/howto-debian.phtml#configure_by_hand), and now I want rtorrent to do its communication with the Internet through this VPN tunnel. So I have a ppp0 interface, which carries the VPN tunnel. What is the next step? I guess it has something to do with the routing tables? I am new to routing, so please be elementary and precise so that I understand! Thank you!
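
    One common approach - a sketch only, not taken from the article - is to bind rtorrent to the address the VPN assigned to ppp0, so that its traffic is sourced from the tunnel; the address below is an assumption for illustration:

        # Find the address the VPN assigned to ppp0
        ip addr show ppp0

        # Then, in ~/.rtorrent.rc, bind rtorrent to that address (example address assumed)
        bind = 10.8.0.6

    If other traffic must keep using the normal default route, policy routing (ip rule plus a separate routing table for the tunnel) is the usual next step.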

    Read the article

  • Thanks to all attendees in Seattle and Toronto

    - by Mike Dietrich
    Must be an Oracle-sponsored number plate ... Thanks to everybody who attended our Upgrade Workshops in Seattle and Toronto this past week. Seattle had a quite unusual track setup with two parallel breakout sessions - we hope you enjoyed it as well. You'll find the slides for the keynote "New Features" and the "Upgrade Workshop - The Whole Story" presentations below. Toronto was quite amazing as well, with so many (hope not too many) people in the slightly crowded room at the Interconti in Toronto. We got a lot of interesting and sometimes challenging questions, and we would like to thank you for your patience. Please find all the slides here: the Upgrade Workshop "The Whole Story" presentation (~545 slides) and "New Features for Oracle Database 11g Release 2" - Roy's keynote from Seattle. For me it was the first time in Canada, and even though it was a very short stopover I enjoyed it very much. Roy and I had dinner at the CN Tower and, besides good food, a marvelous view. I didn't know before that Toronto, within its city limits, is the fifth most populous city in North America. And even though Air Canada ground personnel were partially on strike, I caught my flight to Boston after the workshop. Thanks again and hope to see you next time - happy upgrades Mike

    Read the article

  • It's raining development VirtualBox images again!

    - by pieter.humphrey
    The cloud has burst... the forecast is looking like large amounts of VirtualBox images coming down from OTN. Are you finding the install for Database, WebLogic, SOA or WebCenter complicated when your goal is simply to set up a development sandbox? Sick of giving your credit card info to cloud vendors, only to be stuck in a walled garden where you can't connect to your own internal systems? Are you new to Java and just want something technical to sink your teeth into? Or maybe you just want to put some stuff on that new terabyte drive you got? ;) Have no fear - VirtualBox 4.0 is here. We have several development (read: don't use in production) images that were designed for in-person events, but we're posting them for your enjoyment. Some of the images have step-by-step hands-on labs baked into them too! So get a freeware download manager like BitComet, install VirtualBox and an MD5 checksum utility (if you are on Windows), and get wet! Tags: java, development, java ee, java fx, virtualbox, virtualization, database, soa, weblogic, jdeveloper, eclipse, netbeans, sql developer, times ten, zend, php, SOA Suite, BPM, BAM, B2B, hudson, maven, subversion, Solaris, OTN

    Read the article

  • Using 2 Transparent HAProxy for load balancing

    - by Nyxynyx
    We can configure HAProxy to be a transparent proxy by using the guide here, where one of the steps says "...put the backend servers in a different subnet to the front end clients and make sure that the default gateway points back at the HAProxy load balancer." However, when we need two transparent HAProxy instances in front of our balanced servers (for redundancy), it seems like this won't work, as we can only set one gateway on the balanced servers. What would be the correct way to set up the system so that we can have two transparent HAProxy instances in front of the balanced servers? The main reason for using transparent proxies is the need to see the clients' IP addresses over TCP.
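
    A sketch of the pattern this usually leads to - all addresses are assumptions, and none of this is taken from the guide: give the backends a single virtual gateway IP that floats between the two HAProxy machines (for example with keepalived/VRRP), so that "the default gateway points back at the load balancer" still holds no matter which proxy is active.

        # /etc/keepalived/keepalived.conf on the primary HAProxy (addresses assumed)
        vrrp_instance GW {
            state MASTER
            interface eth1            # backend-facing interface
            virtual_router_id 51
            priority 101              # give the standby HAProxy a lower priority
            virtual_ipaddress {
                192.168.10.1/24       # the backends use this as their default gateway
            }
        }

    The backends then point their default route at 192.168.10.1 instead of at either proxy directly.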

    Read the article

  • Unable to use Gmail in Thunderbird 3

    - by Jatin Ganhotra
    Mozilla Thunderbird v3.1.7. I am trying to set up Gmail, but none of the settings are working. I have tried every resource - blogs, tutorials, instructions by Google, instructions by Thunderbird, questions here - but it still isn't working. My settings are as follows. Server Settings: Server Type: IMAP Mail Server; Server Name: imap.gmail.com; Username: [email protected]; Port: 993 (default: 993); Connection Security: SSL/TLS; Authentication method: Encrypted password. Outgoing Server (SMTP): Server Name: smtp.gmail.com; Port: 587 (default: 25); Connection Security: STARTTLS; Authentication method: Encrypted password; Username: [email protected]. IMAP is enabled in my Gmail settings. ERROR: "Connection to the server [email protected] timed out." I am behind a proxy server, and I have configured those settings under Thunderbird Preferences - Advanced - Network and Disk Space - Connection Settings - Manual Proxy Configuration. The proxy configuration works: when I created a Blogs and News Feeds account it fetched the feeds properly, so Thunderbird itself is configured correctly for the proxy. Help me.

    Read the article

  • Password problem while creating domain

    - by Murdock
    Hi, I'm a freshman at server management, but this seems to be clearly against logic. After updating my Windows Server 2008 Standard 32-bit and installing the DNS Server and AD DS roles, I wanted to create a domain from the command line via the dcpromo.exe setup. But no matter whether I disable the demand for a complex password in the password policies or create a password that fully complies with the requirements for a strong, complex password, I still can't get any further - it says that my password doesn't meet the requirements. I'm also asked there to activate the password requirement with NET USER -passwordreq:yes, and when I do so, that password doesn't work any more and I have to remove it from the other admin account just to be able to log in with the proper Administrator account.

    Read the article

  • Hosting multiple sites on a single webapp in Tomcat

    - by satish
    Scenario: I have a website, www.mydomain.com. Registered users will be given the choice of getting a permanent URL to their account on mydomain.com as a subdomain (username.mydomain.com), or they can opt to use their own domain such as www.userdomain.com. So a user can access his/her account through the subdomain URL or through their own hostname, and the request should be forwarded to a specific URL on mydomain.com. For example, xyz.mydomain.com or www.xyz.com should serve the user's account from www.mydomain.com/webapp/account?id=xyz. The user should be completely unaware of where the content is coming from. Setup: My website is running as a webapp in Tomcat 5.5.28 with Apache as the web server. I am using a VPS, which means I have control over all the configuration files (Apache, Tomcat and DNS server). Can you tell me what configuration is needed to achieve the above scenario?
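
    As an illustration only (host names, the backend port and the proxying flag are assumptions, not a tested answer), the Apache side of this usually looks like a wildcard virtual host that rewrites the Host header into the account URL on the Tomcat webapp:

        # Apache sketch - catch subdomains and parked customer domains (port 8080 assumed)
        <VirtualHost *:80>
            ServerName mydomain.com
            ServerAlias *.mydomain.com www.xyz.com

            RewriteEngine On
            # xyz.mydomain.com -> /webapp/account?id=xyz (%1 is the RewriteCond capture)
            RewriteCond %{HTTP_HOST} ^([^.]+)\.mydomain\.com$ [NC]
            RewriteRule ^/?$ http://localhost:8080/webapp/account?id=%1 [P]
        </VirtualHost>

    Customer-owned domains would need their DNS pointed at the VPS and a similar RewriteCond (or a lookup map) translating each hostname into its account id.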

    Read the article

  • Can two users both control a third machine simultaneously using Synergy?

    - by Reason
    I've been a Synergy user for some time now, as I use a PC to the left of my Mac. My girlfriend and I have our desks on either side of each other, and we'd like to know whether it is possible for both of us to control the PC in the middle with our own separate mice and keyboards. Here's a crude drawing of our setup: (1) her PC, (2) my PC, (3) my Mac. Currently, 3 is running a Synergy server and 2 is running the client. But like I said, I'm wondering if there's a way for 1 and 3 to both control 2 with their own mouse and keyboard. I'd ~love~ to go even further and have both of our mice and keyboards able to control all three computers at the same time, for moments when we need to click or press keys for each other - but that seems a little too much to ask! Any thoughts?

    Read the article

  • Getting Xbox Live via a wired network with my laptop that has internet access wirelessly

    - by Alex Franco
    I'm running the latest version (as of yesterday, anyway) of Ubuntu Desktop 64-bit, installed on my laptop if that makes a difference. The laptop came with Windows 7 preinstalled, and under Windows it worked fine using the house wireless and bridging the connection over a LAN cable to my Xbox for Live. Now with Ubuntu I tried the same setup, but I'm unfamiliar with Ubuntu so I didn't get far. The best I have so far is wireless Internet on my laptop and a wired connection to the Xbox that continually connects and disconnects. Here are my network settings; any fields not listed are either empty on my machine, or are my MAC address or network password. Wireless Network 1: Connect Automatically: checked; Available to all Users: checked; SSID: Franco's; Mode: Infrastructure; MTU: Automatic; IPv4 Settings: Automatic (DHCP); IPv6 Settings: Automatic. Wired Network 1: Connect Automatically: checked; Available to all Users: checked; MTU: Automatic; IPv4 Settings: Automatic (DHCP); IPv6 Settings: Automatic. Any help would be greatly appreciated. EDIT (6:26pm): It seems to be staying connected now. Running the network test on my Xbox, it picks up the network but cannot detect any PC. Restarting the Xbox, however, leaves my computer unable to connect, bringing up the "Wired Network disconnected" blip every minute or so again. Before I restarted the Xbox it said "Connected 100 Mb/s"; now it only says "Connecting". I did have my computer and Xbox on in this disconnect/reconnect cycle for a long period of time, so it may have finally connected, just without the ability to detect my laptop. I left for two hours or so in the middle of typing up the original question and finished posting when I got back, then tried to mess with it a bit again, in case you're wondering why I didn't include this before... I've said too much. Forgive my long-winded fingers :p
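
    For what it's worth, what the Windows "bridged connection" was doing is essentially NAT/connection sharing. On Ubuntu the simplest equivalent is usually to edit the wired connection in Network Manager and set its IPv4 Method to "Shared to other computers", which under the hood amounts to something like this sketch (interface names are assumptions - check them with ip link):

        # Enable forwarding and NAT from the wired side (eth0) out over wireless (wlan0)
        sudo sysctl -w net.ipv4.ip_forward=1
        sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
        sudo iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
        sudo iptables -A FORWARD -i wlan0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT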

    Read the article

  • Best way to use my Windows box as a backup?

    - by user29336
    I put a 1.5 TB HD in my Windows 7 box, and my main computer is a MBP. I have a lot of professional files/folders on a FireWire 800 external HD connected to the MBP, and I want to use the 1.5 TB HD in the Windows 7 box as a backup for both the external HD and the MBP itself. Right now I am just copying files manually to the HD over the network, which is very slow and open to failure (not rsync'd). Can anyone suggest some appropriate solutions? Should I just figure out how to set up rsync on the Windows box, or is there a better alternative? Thanks!
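
    A sketch of what the rsync route could look like, assuming an rsync daemon on the Windows box (e.g. via cwRsync or DeltaCopy) exposing a module named "backup" - the hostname, module and paths are placeholders:

        # From the MBP: mirror the FireWire drive to the Windows box (paths assumed)
        rsync -avz --delete /Volumes/WorkDrive/ rsync://windows-box/backup/workdrive/

    Run from cron or launchd, this keeps the copy incremental instead of a full manual copy each time.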

    Read the article

  • How should an API use HTTP Basic authentication?

    - by user1626384
    When an API requires that a client authenticate to it, I've seen two different scenarios used, and I am wondering which one I should use for my situation. Example 1: an API is offered by a company to allow third parties to authenticate with a token and secret using HTTP Basic. Example 2: an API accepts a username and password via HTTP Basic to authenticate an end user; generally they get a token back for future requests. My setup: I will have a JSON API that I use as the backend for a mobile app and a web app. It seems like good practice for both the mobile and web app to send along a token and secret, so that only these two apps can access the API and any other third party is blocked. But the mobile and web app also allow users to log in, submit posts, view their data, etc., so I would want them to log in via HTTP Basic as well on each request. Do I somehow use a combination of both these methods, or do I only send the end user's credentials (username and token) on each request? If I only send the end user's credentials, do I store them in a cookie on the client?
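
    Purely as an illustration of the "combination" shape being asked about (the header name, URL and token flow are assumptions, not a recommendation): the app identifies itself with its key/secret via HTTP Basic, while the end user's token travels in a separate header.

        # App credentials via Basic auth; user token in its own header (all values assumed)
        curl -u "app_key:app_secret" \
             -H "X-User-Token: token-issued-at-login" \
             https://api.example.com/v1/posts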

    Read the article

  • Patterns for Handling Changing Property Sets in C++

    - by Bhargav Bhat
    I have a bunch of "Property Sets" (simple structs containing POD members). I'd like to modify these property sets (e.g. add a new member) at run time, so that the definition of the property sets can be externalized and the code itself can be re-used with multiple versions/types of property sets with minimal or no changes. For example, a property set could look like this:

        struct PropSetA
        {
            bool activeFlag;
            int  processingCount;
            /* snip: a few other such fields */
        };

    But instead of setting its definition in stone at compile time, I'd like to create it dynamically at run time. Something like:

        class PropSet propSetA;
        propSetA("activeFlag", true);      // overloading the function-call operator
        propSetA("processingCount", 0);

    And the code dependent on the property sets (possibly in some other library) will use the data like so:

        bool actvFlag = propSet["activeFlag"];
        if (actvFlag == true)
        {
            // Do stuff
        }

    The current implementation behind all of this is as follows:

        #include <map>
        #include <string>

        class PropValue
        {
        public:
            // Variant-like class for holding multiple data types.
            // Overloaded conversion operators, e.g.:
            operator bool()
            {
                return (baseType == BOOLEAN) ? this->ToBoolean() : false;
            }
            // And methods to create PropValues from the various base data types:
            static PropValue FromBool(bool baseValue);
        };

        class PropSet
        {
        public:
            // Overloaded function-call operator for adding properties
            void operator()(std::string propName, bool propVal)
            {
                propMap.insert(std::make_pair(propName, PropValue::FromBool(propVal)));
            }
        protected:
            // The property map
            std::map<std::string, PropValue> propMap;
        };

    The problem at hand is similar to this question on SO, and the current approach (described above) is based on this answer. But as noted over at SO, this is more of a hack than a proper solution. The fundamental issues I have with this approach are: extending it to support new types requires significant code change (at the bare minimum, the overloaded operators need to be extended for the new type); supporting complex properties (e.g. a struct containing a struct) is tricky; and supporting a reference mechanism (needed for the optimization of not duplicating identical property sets) is tricky, which also applies to supporting pointers and multi-dimensional arrays in general. Are there any known patterns for dealing with this scenario? Essentially, I'm looking for the equivalent of the visitor pattern, but for extending class properties rather than methods. Edit: modified the problem statement for clarity and added some more code from the current implementation.

    Read the article

  • Keep printed documents on Windows Server 2008 R2 Print Server

    - by MadBoy
    I've set up Windows Server 2008 R2 as a print server. I have checked the "Keep printed documents" option for all printers and it works fine: users print their stuff and I can see what they are doing. The problem is that everyone sees all documents being printed, which is not always the best idea. Is there a way to: 1) limit print jobs so they are only seen by the people who printed them and by admins; 2) limit print jobs so they are only seen on the server (from within Server Manager), so that a print job disappears from the user's queue once it is done (but admins can still see it and track what was printed and when, for reporting purposes); 3) create some kind of access-level list so that some people can see everything getting printed, some people see only their own print jobs, and some people see nothing :-)

    Read the article

  • Postfix tutorial inconsistency

    - by Desmond Hume
    I'm following this tutorial to set up a Postfix/Dovecot mail server with Postfix Admin as a web front end. Regarding the directory structure for virtual mail users, the author of the tutorial writes: "Virtual mail users are those that do not exist as Unix system users. They thus don't use the standard Unix methods of authentication or mail delivery and don't have home directories. That is how we are managing things here: mail users are defined in the database created by Postfix Admin rather than existing as system users. Mail will be kept in subfolders per domain and account under /var/vmail - e.g. [email protected] will have a mail directory of /var/vmail/example.com/me." But when he gives instructions for configuring Postfix Admin, he suggests that Postfix Admin's config.inc.php should contain this:

        // Mailboxes
        // If you want to store the mailboxes per domain set this to 'YES'.
        // Examples:
        //   YES: /usr/local/virtual/domain.tld/[email protected]
        //   NO:  /usr/local/virtual/[email protected]
        $CONF['domain_path'] = 'NO';

    Is there an inconsistency?
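
    For context, a hedged reading of the two Postfix Admin options involved - the values below are what would match the tutorial's per-domain layout (/var/vmail/example.com/me), not what the tutorial actually shows:

        // config.inc.php sketch (assumed values)
        $CONF['domain_path'] = 'YES';        // put each domain in its own directory
        $CONF['domain_in_mailbox'] = 'NO';   // don't repeat the domain inside the mailbox part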

    Read the article

  • Problem with Jumbo Frames

    - by Spookyone
    Hello, I am trying to set up jumbo frames on my gigabit home LAN, but no luck so far. My setup is: * D-Link DIR-655 router, HW revision A3, firmware 1.21 EU * Synology DS107+, firmware 3.0-1337 * Laptop with Windows 7 x64 and an external PCIx NIC driven by the "Generic Marvell Yukon 88E8053 based Ethernet Controller" driver. The router is supposed to support jumbo frames but doesn't expose any relevant setting. I set the Jumbo Packet value to 9000 on both the NIC and the Synology box, but it doesn't work; ping -f -l 8972 says "Packet needs to be fragmented but DF set". Is there some other setting I have overlooked, does the DIR-655 not actually support jumbo frames, or what else could be the problem?
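
    For reference, the payload size in that test follows from the target MTU - a quick sanity check, assuming a standard 9000-byte jumbo MTU and Windows ping syntax (target address assumed):

        # 9000-byte MTU - 20-byte IP header - 8-byte ICMP header = 8972-byte payload
        ping -f -l 8972 192.168.0.1

    If any device on the path is still at MTU 1500, the "needs to be fragmented" error is exactly what would be expected.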

    Read the article

  • Accessing Repositories from Domain

    - by Paul T Davies
    Say we have a task-logging system: when a task is logged, the user specifies a category, and the task defaults to a status of "Outstanding". Assume in this instance that Category and Status have to be implemented as entities. Normally I would do this in the application layer:

        public class TaskService
        {
            //...
            public void Add(Guid categoryId, string description)
            {
                var category = _categoryRepository.GetById(categoryId);
                var status = _statusRepository.GetById(Constants.Status.OutstandingId);
                var task = Task.Create(category, status, description);
                _taskRepository.Save(task);
            }
        }

    Entity:

        public class Task
        {
            //...
            public static Task Create(Category category, Status status, string description)
            {
                return new Task
                {
                    Category = category,
                    Status = status,
                    Description = description
                };
            }
        }

    I do it like this because I am consistently told that entities should not access the repositories, but it would make much more sense to me if I did this instead. Entity:

        public class Task
        {
            //...
            public static Task Create(Category category, string description)
            {
                return new Task
                {
                    Category = category,
                    Status = _statusRepository.GetById(Constants.Status.OutstandingId),
                    Description = description
                };
            }
        }

    The status repository is dependency-injected anyway, so there is no real dependency, and this feels more to me like it is the domain that is making the decision that a task defaults to Outstanding. The previous version feels like it is the application layer making that decision. And why are repository contracts so often placed in the domain if this should not be a possibility? Here is a more extreme example, where the domain decides urgency. Entity:

        public class Task
        {
            //...
            public static Task Create(Category category, string description)
            {
                var task = new Task
                {
                    Category = category,
                    Status = _statusRepository.GetById(Constants.Status.OutstandingId),
                    Description = description
                };

                if (someCondition)
                {
                    if (someValue > anotherValue)
                    {
                        task.Urgency = _urgencyRepository.GetById(Constants.Urgency.UrgentId);
                    }
                    else
                    {
                        task.Urgency = _urgencyRepository.GetById(Constants.Urgency.SemiUrgentId);
                    }
                }
                else
                {
                    task.Urgency = _urgencyRepository.GetById(Constants.Urgency.NotId);
                }

                return task;
            }
        }

    There is no way you would want to pass in all possible versions of Urgency, and no way you would want to calculate this business logic in the application layer, so surely this would be the most appropriate way? So is this a valid reason to access repositories from the domain?

    Read the article

  • ESXi: change MKS port

    - by Daniel Powell
    I need to connect to my home ESXi box over the web, but I cannot use the default port 902 for the console viewer due to firewall restrictions. Is there a way to change this port somewhere, even if I can only do some NAT and redirect some other port to it? I've had a look around, and when I try to connect to the ESXi server in the vSphere Client I can't find anywhere to specify the port. I know this is not the recommended way to do this, but it's a testing server and security is not an absolute must on this box. I also cannot set up a VPN to this box.
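
    Purely as a sketch of the NAT-redirect idea mentioned above (the addresses and the externally allowed port are assumptions), a Linux gateway in front of the host could forward a permitted port to the MKS port 902 on the ESXi box:

        # Forward external port 10902 to 902 on the ESXi host (addresses assumed)
        iptables -t nat -A PREROUTING -p tcp --dport 10902 -j DNAT --to-destination 192.168.1.50:902
        iptables -t nat -A POSTROUTING -p tcp -d 192.168.1.50 --dport 902 -j MASQUERADE

    This only helps if the client side can also be pointed at the alternate port, for example with a matching local redirect back to 902 on the client's own machine.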

    Read the article

  • Need a free/open-source network monitoring tool for office LAN

    - by Amit Ranjan
    I know there must be a lot of similar questions on SU. Let me explain my setup first. I have 4-5 PCs, laptops and a few Android phones in my office. To get them on a network I have a UTStarcom WA3002G1 ADSL2+ router with a landline broadband connection, which has nothing to do with any PC except for its configuration settings. The broadband channel is always on; we just switch on the router and the Internet is ready for us. No Internet connection sharing is done via any PC. I have a limited 20 GB monthly plan, which is consumed in 10-20 days depending on the download requirements. So, given the above, I need some suggestions from you: 1) How do I monitor my Internet bandwidth, along with the connected systems, in real time? Is any free open-source tool available? 2) What tweaks/changes can I make on the PCs to save bandwidth, since my ISP does not offer any unlimited plan? The PCs and laptops run Windows XP and/or Windows 7; tools for either platform are welcome.

    Read the article

  • Can't boot Windows XP after installing Ubuntu 12.04

    - by Omul Neted
    Here's the situation. I installed Ubuntu using the "install alongside" option. Everything went OK: when I restarted, I went straight into Ubuntu and it worked beautifully. But when I restarted and tried to enter Windows, the loading screen appeared and after 3-4 seconds the machine restarted again - no error, no waiting cursor, nothing. I looked on the Internet for help and found several resources. I tried LILO first, since it seemed that many people had their issue solved with it; after LILO, neither Ubuntu nor Windows would start. I installed and ran bootinfoscript; the RESULTS.txt can be seen here: https://www.dropbox.com/sh/r3luoa672qe73uq/Mob13HhNiB. After that I looked at Boot-Repair and did as instructed here: "Can't boot XP after Ubuntu Installation, how to fix?" - meaning I rewrote the MBR of my Ubuntu install using a generic MBR - with no success. The results of Boot-Repair are in the first link. Now when I restart my computer I don't even get the Windows loading screen, just "Missing operating system / Missing operation system / Operating system not found" - that's it. I did not use the fixboot or fixmbr options because I don't have a Windows CD capable of seeing my HDD drivers (the usual Windows XP setup tells me that I have no HDD). Please help, I don't know what to do next. This is my first time with Ubuntu or any Linux OS.
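
    Once the Windows side of the boot chain is repaired, the GRUB side normally only needs a chainload entry - a sketch of what that looks like (the partition is an assumption; Boot-Repair/os-prober will usually generate this automatically):

        # /etc/grub.d/40_custom - chainload the Windows XP partition (device assumed)
        menuentry "Windows XP" {
            insmod ntfs
            set root='(hd0,1)'
            chainloader +1
        }

    After adding the entry, the menu is regenerated with sudo update-grub.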

    Read the article

  • Can't configure Openfire

    - by SnOrfus
    I'm trying to set up Openfire on one of the servers here. I've gone through the Windows installer, installed the service and started it, but I can't connect to the admin console. If I go to http://127.0.0.1:9090 (or http://127.0.0.1/index.html), all I get is a blank page. I also tried running the GUI instead of the service; it said that it was listening on 127.0.0.1:9090, but when I navigate there or click "Launch Admin" I still get a blank page. What could be the problem? It's a Windows Server 2003 machine with IIS running (it hosts a couple of other sites). Edit: Openfire 3.6.4. I installed it on my local machine without problems, so it's obviously something on the server that's causing this. There is no firewall installed on that server, so I'm not sure what would be stopping it.
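
    A quick way to confirm whether Openfire is actually bound to the admin port and whether something else (IIS, for instance) has grabbed it - a hedged diagnostic sketch, assuming only the default 9090/9091 console ports:

        REM Show which process owns port 9090 (and 9091 for the HTTPS console)
        netstat -ano | findstr ":9090 :9091"
        REM Match the PID from the last column against the running processes (1234 is a placeholder)
        tasklist /FI "PID eq 1234"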

    Read the article

  • Weird routing problems with VPN

    - by Borek
    In our VPN setup I have to add a route to my routing table like this: route add 1.2.3.0 mask 255.255.255.0 172.16.1.1 -p. Our internal addresses 1.2.3.x then use 172.16.1.1 as their gateway, and both my local Internet and the work VPN work at the same time. However, when I disconnect from the VPN and reconnect again, I can't ping our servers even though the connection status is "Connected". When I do route print, my previously added route is listed, but it doesn't seem to work. So I try to execute the route add command again and, as expected, it tells me "The route addition failed: The object already exists." But - and that's the point - when I then try to ping our servers again, everything works! So every time, I have to execute a route add command that fails but fixes the issue at the same time. Any ideas what I might be doing wrong? My PC is Windows 7 x64, I am an administrator, UAC is enabled, and the command prompt is run with elevated privileges.
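
    One hedged guess at the mechanism: after a reconnect the persistent route may still be tied to the old interface index of the tunnel adapter, and re-adding it rebinds it. A sketch of making that binding explicit (the interface index 26 is an assumption - read the real one from the interface list at the top of route print):

        :: Inspect the interface list and the stored route
        route print
        :: Re-bind the route to the VPN adapter's interface after each reconnect (index assumed)
        route change 1.2.3.0 mask 255.255.255.0 172.16.1.1 if 26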

    Read the article

  • Task scheduler does not kill task

    - by Andomar
    We have a scheduled task that sometimes hangs - it just stops responding. On Windows 2003 we had Task Scheduler configured to kill the task after 3 hours. It's a 32-bit process. On Windows 2008 R2 we've set "Stop the task if it runs longer than" and "If the running task does not end when requested, force it to stop". However, when the task hangs it is never stopped and stays visible in Process Explorer for days. Any clue why Windows Task Scheduler would not kill the process? (This post has a reproducible setup for the issue.)

    Read the article

  • JSP Include: one large bean or bean for each include

    - by shylynx
    I want to refactor a webapp that consists of very distorted JSPs and servlets. Because we can't easily switch to a web framework, we have to keep the JSPs and servlets, and now we are unsure how to include pages in one another and how to set up the jsp:useBean directives effectively. As a first step we want to decouple the code for the core actions and the bean creation into servlets. The servlets should forward to their corresponding pages, which then use the beans. The problem is that each JSP consists of different sub- and sub-sub-JSPs that are included in one another. Here is a shortened extract (reality is more complex): head, header, top, navigation, actionspanel, main, header, actionspanel, foot, footer. Moreover, each JSP (including the header and footer) uses dynamic data. For example, the title and the actionspanel can change on each page reload, or carry links and labels that depend on the processing done by the preceding servlet. I know that JSP include directives should only be used for static content and should be avoided for dynamic content, but here we have very large pages that consist of many parts. Now the core questions: Should I use one big bean for each page, so that each bean also holds the data for the header and footer besides its core data, and each included JSP uses the same useBean directive? For example: DirectoryJSP <- DirectoryBean; CompareJSP <- CompareBean. Or should I use one bean per JSP, so that each bean only holds the data for one JSP and its own purpose? For example: DirectoryJSP <- DirectoryBean, HeaderJSP <- HeaderBean, FooterJSP <- FooterBean; CompareJSP <- CompareBean, HeaderJSP <- HeaderBean, FooterJSP <- FooterBean. In the second case, should the subordinate beans be members of the corresponding parent bean, so that only the parent bean is attached as a request attribute, or should each bean be attached to the request separately?
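
    A minimal sketch of the second option (one bean per JSP), using the standard jsp:useBean / jsp:include mechanics - the class names, and the assumption that the servlet has already placed the beans in request scope, are illustrative rather than taken from the original app:

        <%-- CompareJSP: the servlet has already stored a CompareBean in request scope --%>
        <jsp:useBean id="compareBean" class="com.example.CompareBean" scope="request" />
        <jsp:include page="header.jsp" />  <%-- header.jsp does its own jsp:useBean for HeaderBean --%>
        <p>Comparing: <%= compareBean.getTitle() %></p>
        <jsp:include page="footer.jsp" />

    Each included fragment then depends only on its own bean rather than on one page-wide object.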

    Read the article

  • Remote logging for multiple Apache virtual hosts using syslog-ng

    - by James
    I'm running a couple of Apache web servers that each have 4-8 separate virtual hosts on them. I'm trying to set up a dedicated log server that stores each virtual host's access and error logs in a separate directory for that virtual host. For example, on the logging server: /var/log/remove/10.0.0.2/virtualhost1 contains access_log and error_log; /var/log/remove/10.0.0.2/virtualhost2 contains access_log and error_log; /var/log/remove/10.0.0.3/virtualhost3 contains access_log and error_log; and so on. Right now I have it split up by host, but I can't figure out how to split it additionally by virtual host. Here are the relevant lines from the logging server's syslog-ng.conf:

        source r_src { tcp(ip("0.0.0.0") port(5140)); };
        destination r_all { file("/opt/splunk/logs/$HOST"); };
        log { source(r_src); destination(r_all); };

    Any help would be appreciated. Thanks!
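
    One sketch of how the per-virtual-host split is often done (the tag name, facility and path are assumptions): have each vhost pipe its access log through logger with its own tag on the web server, then key the syslog-ng destination on the $PROGRAM macro on the log server.

        # Apache vhost on the web server - tag the log stream per virtual host
        CustomLog "|/usr/bin/logger -t virtualhost1 -p local6.info" combined

        # syslog-ng.conf on the log server - $PROGRAM picks up that tag
        destination r_vhost { file("/var/log/remove/$HOST/$PROGRAM/access_log"); };
        log { source(r_src); destination(r_vhost); };

    Error logs could be handled the same way with a second tag (e.g. virtualhost1_error) and a matching destination.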

    Read the article
