Search Results

Search found 22413 results on 897 pages for 'train main'.


  • How to sync my PDA (outlook, no wifi) with my Google Calendar?

    - by Peter
    Hey everybody, we have an old PDA that is not networked (Mio 168). Its main purpose is to serve as an agenda, for which we use the Outlook that came with it. Since my wife usually has the PDA and I want to be able to check and update our agenda too, I'm looking for a way to sync it with Google Calendar by hooking it to my PC via USB. I found a tool to sync Outlook with Google Calendar, but I would need Outlook on my PC to use it, and I don't have Outlook on my PC, nor do I want to buy it just for this sync. So, does anybody here know if and how I can sync the Outlook on my PDA with Google Calendar without a PC version of Outlook as a go-between? Cheers.

    Read the article

  • WDS (Wireless Distribution System) does not work

    - by xdevel2000
    I have a D-Link DI-524 as the main router (192.168.0.1), connected to the Internet, and a second router (192.168.0.2), a TP-Link WR841N, with WDS enabled and correctly configured to join the D-Link. I can connect a laptop (192.168.0.100) to the TP-Link via wireless, and both routers seem to work, but from the laptop I can't reach the Internet. It seems as if WDS is not working. From the laptop I can only ping the TP-Link; other pings (to the D-Link or to other LAN computers) get no response. What's the problem? Does the D-Link perhaps also need the WDS option enabled? Thanks.

    Read the article

  • My NTFS Partition keeps becoming "unusable" on Ubuntu, Any Ideas?

    - by gopherman
    I just purchased a new Seagate 2TB external drive. My main system dual-boots Windows and Ubuntu, so I am pretty much stuck with keeping the drive as NTFS. I have done this without any problems before, but since I got this new drive I have been having issues. When I first load up Ubuntu the drive mounts and runs fine; after an unspecified amount of time I start getting input/output errors when accessing it. When I go to the Disk Utility I get a message stating the drive is "Unknown or Unused". If I disconnect and reconnect the drive, or reboot, everything is fine again. No errors are coming up in S.M.A.R.T., and the drive seems to work fine under Windows. Any thoughts?
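
    Since the failure appears after an unspecified amount of time, a timestamped log of when the I/O errors begin can help correlate the event with USB resets in dmesg. A minimal polling sketch (Python; the mount point is an assumption, adjust to yours):

        import os, time, datetime

        MOUNT_POINT = "/media/seagate"  # hypothetical mount point

        while True:
            try:
                os.listdir(MOUNT_POINT)  # any read touching the filesystem will do
            except OSError as err:
                # first I/O error: note the time, then compare with dmesg output
                print(datetime.datetime.now().isoformat(), err)
                break
            time.sleep(60)  # poll once a minute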

    Read the article

  • How does a private intranet connect to the Internet?

    - by user24454
    Yesterday I visited the offices of RailTel, a public company in India that provides the communication backbone for the Indian Railways. They have a very sophisticated setup of optical fiber cables for data transmission, and they said that this is a private network for internal use only. Later I was in the Exchange Office, the main communication office, the place where they actually use those communication channels. They said that we could connect to the intranet as well as the Internet! My question is: how is this possible? How can privately laid optical fibers connect globally? On Google I picked up the term "internet exchange", but this has got me confused further: why would a private network want to go to this exchange? Please explain to me in very simple terms how this all works. If this is just a connection of wires, then why charge so much for a little bandwidth? Thanks.

    Read the article

  • Service Catalogs for Database as a Service

    - by B R Clouse
    At the end of last month, I had the opportunity to present a speaking session at Oracle OpenWorld: Database as a Service: Creating a Database Cloud Service Catalog.  The session was well attended, which would have surprised me several months ago when I started researching this topic.  At that time, I thought of service catalogs as something trivial that could be explained in a few simple slides.  But while looking at all the different options and approaches available, I came to learn that designing a succinct and effective catalog is not a trivial task, and mistakes can lead to confusion and unintended side effects.  And when the room filled up, my new point of view was confirmed. In case you missed the session, or were able to attend but would like more details, I've posted a white paper that covers the topics from the session, and more.  We start with an overview of the components of a service catalog, then look at several customer case studies of service catalogs for DBaaS.  Synthesizing those examples, we summarize the main options for defining the service categories and their levels, and we end with a template for defining Bronze | Silver | Gold service tiers for Oracle Database Services. The paper is now available here - watch for updates as we work to expand some sections and incorporate readers' feedback (hint - that includes your feedback). Visit our OTN page for additional Database Cloud collateral.
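
    As a loose illustration of the tiered-catalog idea (not the template from the paper; the attribute names and values below are invented for the example), a Bronze/Silver/Gold catalog often boils down to a small lookup structure (Python):

        # Hypothetical service tiers for a DBaaS catalog - attributes and
        # values are illustrative only, not Oracle's published template.
        SERVICE_TIERS = {
            "bronze": {"cpu_cores": 2, "ram_gb": 8,  "ha": False, "backup": "weekly"},
            "silver": {"cpu_cores": 4, "ram_gb": 16, "ha": True,  "backup": "daily"},
            "gold":   {"cpu_cores": 8, "ram_gb": 32, "ha": True,  "backup": "continuous"},
        }

        def provision_request(tier: str) -> dict:
            """Resolve a catalog tier to its concrete resource allocation."""
            return SERVICE_TIERS[tier.lower()]

        print(provision_request("Silver"))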

    Read the article

  • github team workflow - to fork or not?

    - by aporat
    We're a small team of web developers currently using Subversion, but soon we're making the switch to GitHub. I'm looking at different types of GitHub workflows, and we're not sure if the whole fork-per-developer concept is such a good idea for us. If we use forks, I understand each developer will have his own private remote and local repositories. I'm worried it will make pushing changesets hard and too complex. Also, my biggest concern is that it will force each developer to have two remotes: origin (the remote fork) and upstream (used to sync changes from the main repository). I'm not sure that's such an easy way to do things. This is similar to the workflow explained here: https://github.com/usm-data-analysis/usm-data-analysis.github.com/wiki/Git-workflow If we don't use forks, we can probably get by fine with a central repo, creating a branch for each task we're working on and merging them into the development branch on the same repository. It means we won't be able to restrict merging of branches, and it might get a little messy to have many branches on the central repository. Any suggestions from teams who have tried both workflows?

    Read the article

  • Will SSD degrade when running VMWare Workstation from SSD?

    - by Andrey Botalov
    My main OS (Windows 7 or 8) runs from an SSD. I want to run Mac OS X 10.7 or 10.8 using VMware Workstation. I've heard that VMware doesn't support TRIM and other SSD optimizations, so the SSD will quickly degrade if the VM is run from it. Would it be better to put the guest OS's files (.vmdk and the rest) on an external HDD (connected through USB 2 or 3) instead of the SSD? What advantages and disadvantages would that give? What about putting the VM on an internal HDD? On which drive type will the VM perform best?

    Read the article

  • Dynamically changing one-node Cassandra cluster to two nodes

    - by Jason Axelson
    So I have an application that will be dormant most of the time but will need to handle high bursts a few days out of the month. Since we are deploying on EC2, I would like to keep only one Cassandra server up most of the time and then, on burst days, bring one more server up (with more RAM and CPU than the first) to help serve the load. What is the best way to do this? Should I take a different approach? Some notes about what I plan to do: bring the node up and repair it immediately; after the burst time is over, decommission the powerful node; use the always-on server as the seed node. My main question is how to get the nodes to share all the data, since I want a replication factor of 2 (so both nodes have all the data), but that won't work while there is only one server. Should I bring up two extra servers instead of just one?
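
    One moving part worth sketching: a keyspace's replication factor can be raised once the second node joins, after which a repair streams the existing data onto it. A rough sketch with the DataStax Python driver (the keyspace name and seed address are assumptions, and nodetool still has to be run on the node itself):

        from cassandra.cluster import Cluster

        # Connect via the always-on seed node
        cluster = Cluster(["10.0.0.1"])  # hypothetical seed address
        session = cluster.connect()

        # Raise the replication factor once the burst node has joined the
        # ring; 'myapp' is a hypothetical keyspace name.
        session.execute(
            "ALTER KEYSPACE myapp WITH replication = "
            "{'class': 'SimpleStrategy', 'replication_factor': 2}"
        )

        # After this, run `nodetool repair myapp` on the new node so existing
        # data streams to it; later, `nodetool decommission` removes it cleanly.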

    Read the article

  • How to Program AWS Spot Instances to Strategically Bid So the Auction is Never Lost Until a Competitor Beats the Maximum I'm Willing to Pay?

    - by Taal
    I believe I'm in the right section of Stack Exchange to ask this; if not, let me know. I only use Amazon Web Services for temporary hosting, so spot instances are quite valuable to me. I could also just create an instance and start and stop it, but sadly that doesn't fit my bootstrapped budget. Anyway, it really kills me when someone outbids me on a spot instance (I tend to go for the larger types, of which fewer are available) and I get randomly kicked off. I believe there is a way to programmatically and dynamically change my bid price to beat a potential competitor's if theirs is higher than mine. Now, I previously believed Amazon would just charge me the price right above the next-lowest competing bid automatically (eliminating the need for this), so that if I bid too high, I would only pay what I needed to in order to win and keep the auction; essentially, I thought my bid price was my maximum bid price. Apparently, according to my bills and several experiments I've done, this is not the case: they charge me whatever I bid, even when I know there is no one else around to counter-bid me. I needed to clarify that, but let me get back to the main point. Say I'm bidding $0.50, and a competitor comes in and bids $0.55; I get kicked off. I want to set a maximum I'm willing to pay (say $1.00 here), and when the competitor bids $0.55, have my bid dynamically adjusted to beat his at $0.56, and so on, until he breaks my $1.00 threshold. I've been reading the guides, and although they are more or less straightforward, I feel like they leave a few holes that end up confusing me - for instance, where do I input said command, and when? Maybe I'm just tech-illiterate and need help deciphering these guides. A good starting point for someone willing to answer/help me decipher this problem would be here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-as-update-bid.html
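
    For what it's worth, AWS doesn't expose competitors' bids directly - only the current spot price per instance type and availability zone - so the usual approach is to poll that price and re-submit a request whenever you're outbid, capped at your ceiling. A rough sketch using boto3, a modern Python SDK (region, instance type, AZ and prices are assumptions):

        import time
        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region

        CEILING = 1.00                # absolute maximum you are willing to pay
        INSTANCE_TYPE = "m3.2xlarge"  # hypothetical large instance type
        AZ = "us-east-1a"

        def current_spot_price() -> float:
            """Fetch the most recent spot price for the type/AZ."""
            history = ec2.describe_spot_price_history(
                InstanceTypes=[INSTANCE_TYPE],
                ProductDescriptions=["Linux/UNIX"],
                AvailabilityZone=AZ,
                MaxResults=1,
            )
            return float(history["SpotPriceHistory"][0]["SpotPrice"])

        while True:
            price = current_spot_price()
            if price >= CEILING:
                print("Market exceeds ceiling; stay out until it drops.")
            else:
                # Bid one cent above the market, never above the ceiling.
                bid = min(round(price + 0.01, 2), CEILING)
                print(f"Would (re)request spot capacity at ${bid:.2f}")
                # e.g. ec2.request_spot_instances(SpotPrice=str(bid), ...)
            time.sleep(300)  # re-check every five minutes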

    Read the article

  • Host a Debian repository on a Windows Web/Ftp server

    - by Dave
    At the risk of causing a matter vs. antimatter paradox that would end the world as we know it ... Is it possible to host a Debian repository on a Windows server? We have some applications which are available for Windows, Mac OS X, and Linux. Our web site, from where the application can be downloaded, is a Windows Server 2008 box running IIS 7. That is not going to change, and I would like to avoid having to purchase another server and/or domain. I would like to take advantage of the Debian packaging system so that I can just instruct users to add our repository to their software sources, and then they can install, get updates, resolve dependencies (some of which are not yet in the stable/main distributions of my target platforms), etc. The instructions I can find on the internet require linux-specific tools to create a local repository, but are unclear as to whether or not that can be copied to an FTP site as is, or if it requires some local daemons to be running or something.

    Read the article

  • Danger in running a proxy server? [closed]

    - by NessDan
    I currently have a home server that I'm using to learn more and more about servers. There's also the advantage of being able to run things like a Minecraft server (yeah!). I recently installed and set up a proxy service known as Squid. The main reason was so that no matter where I was, I would be able to access sites without dealing with any network content filter (like at schools). I wanted to make this public, but I had second thoughts. It occurred to me last night that if people were using my proxy, couldn't they access illegal materials with it? What if someone used my proxy to download copyrighted material, or launched an attack on another site via my proxy? What if someone actually looked up child pornography through the proxy? My question is: am I liable for what people use my proxy for? If someone commits an illegal act and it leads back to my proxy server, could I be held accountable for their actions?

    Read the article

  • Motivation for a service layer (instead of just copying dlls)?

    - by BornToCode
    I'm creating an application that has two different UIs, so I'm building it with a service layer, which I understood is appropriate for such a scenario. However, I found myself just creating web methods for every single method I have in the BL layer, so the services are basically built from methods that look like this: return customers_bl.Get_Customer_Prices(customer_id); I understood that a main point of the service layer is to prevent duplication of code, so I asked myself: why not just import the BL dll (and the DAL dll) into the other UI, and re-copy the dlls whenever I make a change? It might not be so 'neat', but it's still less hassle than one more layer. (I know something is wrong in my approach; I'm probably missing the importance of the service layer. I'd like more motivation to create another layer, especially because many of my BL functions ALREADY look like return customers_dal.Get_Customer_Prices(cust_id), which led me to ask: was it really necessary to create the BL just because several functions actually have logic inside?) So I'm looking for more motivation to create ONE MORE layer - surely it's not just to spare me re-copying the dlls on changes? Am I grasping it wrong? Are there any simple guidelines on how to design a service layer (should it correspond to all the BL layer functions or not? any simple example?), or any enlightenment on the subject?
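
    To make the distinction concrete, here is a loose sketch (Python used for illustration; the class and method names echo the question and are otherwise invented) of the difference between calling the BL directly and exposing it behind a service boundary:

        # Business layer - what both UIs ultimately need (names from the question).
        class CustomersBL:
            def get_customer_prices(self, customer_id: int) -> list:
                ...  # validation, pricing rules, calls into the DAL

        # Service layer: a remote-friendly facade. The point is not the body
        # of each method (often a one-line delegation, as the question
        # observes) but the boundary itself: one deployed copy of the BL, a
        # stable contract for every UI, and a single place for auth, logging
        # and versioning.
        class CustomerService:
            def __init__(self, bl: CustomersBL):
                self._bl = bl

            def get_customer_prices(self, customer_id: int) -> list:
                # cross-cutting concerns (auth checks, auditing) would live here
                return self._bl.get_customer_prices(customer_id)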

    Read the article

  • Should I use subdomains or subfolders for my user groups?

    - by bilygates
    Hello, I run a photography website where each user has their own subdomain (i.e. user.site.com). I'm thinking of adding user groups, but I'm unable to decide whether I should give each group its own subdomain or simply a subfolder.

    Subfolders (www.site.com/groups/my-group) - Pros: easier to maintain from a technical point of view. Cons: harder to memorize; the URLs can get really long (www.site.com/groups/my-group/albums/my-album/).

    Subdomains (my-group.site.com) - Pros: easier to memorize; shorter URLs; one might get the impression that such a URL is somewhat more "independent" from the main site. Cons: group and user names share the same namespace, so we need to check for collisions when creating a new user or group; one cannot tell from the URL alone whether x.site.com is a user page or a group page.

    What's your opinion on the matter? I should note that DeviantArt.com uses the second option (that's where I got the idea). Thank you in advance!
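
    On the namespace-collision con: the check itself is small enough to sketch. Something like the following (Python; the reserved labels and function name are invented for the example) would run at both user and group creation time:

        RESERVED = {"www", "mail", "groups", "api"}  # hypothetical reserved labels

        def subdomain_available(name: str, users: set, groups: set) -> bool:
            """A label may be used only once across users, groups and reserved names."""
            label = name.lower()
            return label not in RESERVED and label not in users and label not in groups

        # usage sketch
        users, groups = {"alice", "bob"}, {"landscapes"}
        print(subdomain_available("Landscapes", users, groups))  # False - taken by a group
        print(subdomain_available("portraits", users, groups))   # True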

    Read the article

  • Printer Canon MP540 successfully added finally but doesn't print? (Attached debug log)

    - by NES
    I tried to set up my printer on Ubuntu 10.10. I had to use a special guide to install it, because Canon uses libcupsys2 in its packages while Ubuntu expects libcupsys; I followed this guide in reference to this advice. The first problem was that the Ubuntu "add printer" dialog asked for root credentials; the workaround suggested on Launchpad (creating a root password) worked, and I added the printer, which is now available for printing. Then I got a "cups insecure filter" error message which prevented me from printing. That was solved by setting the needed root rights on the /usr/lib/cups/filter/ directory; the error message disappeared after restarting the cups service. Now it should work, but it doesn't. The main problem now is that the printer seems to be properly set up, but when I try to print a document, the printer icon appears briefly in the GNOME panel, and there's a print job in the queue which gets marked completed, but the printer doesn't print. I attached the debug log provided by the printer error control (I had to upload it to another site, since it was too big for the question body). Perhaps someone can identify the problem with it? Note: I know it once worked fine with an older release of Ubuntu, but I'm not sure which version that was.

    Read the article

  • Running a VM off of an external HD via USB

    - by Nelson LaQuet
    Is it viable to run a VM (VMware running Windows 7) off of an external HD via USB - that is, to reference the vmx/vhd files directly from the mounted drive? I know it's possible, but I'm asking whether USB provides enough bandwidth for normal usage. If so, are there any particular brands that may be better or worse? I know that eSATA would be a more viable setup, but my laptop doesn't have an eSATA port. Currently I use the VM to segregate all of my work development servers and software from my main machine, so I will be running all development servers and tools on the VM directly.

    Read the article

  • SQLite DB borked when opened on a different machine

    - by pruefsumme
    Hello, I'm using SQLite to store some data. The primary database is on a NAS (Debian Lenny, 2.6.15, armv4l) since the NAS runs a script which updates the data every day. A typical "select * from tableX" looks like this: 2010-12-28|20|62.09|25170.0 2010-12-28|21|49.28|23305.7 2010-12-28|22|48.51|22051.1 2010-12-28|23|47.17|21809.9 When I copy the DB to my main computer (Mac OS X) and run the same SQL query, the output is: 2010-12-28|20|1.08115035175016e-160|25170.0 2010-12-28|21|2.39343503830763e-259|-9.25596535779558e+61 2010-12-28|22|-1.02951149572792e-86|1.90359837597183e+185 2010-12-28|23|-1.10707273937033e-234|-2.35343828462275e-185 The 3rd and 4th column have the type REAL. Interesting fact: When the numbers are integer (i.e. they end with ".0"), there is no difference between the two databases. In all other cases, the differences are ... hm ... surprising? I can't seem to find a pattern. If someone's got a clue - please share!
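
    A plausible culprit, for what it's worth: old ARM FPA builds store IEEE doubles with their two 32-bit halves swapped, so REALs written by such a build decode as garbage on other machines, while integral values survive because SQLite's record format stores exactly-integral REALs as integers - which matches the ".0" observation. A small sketch of the re-interpretation (Python; illustrative only):

        import struct

        def double_from_fpa(raw: bytes) -> float:
            """Re-interpret an 8-byte double whose 32-bit halves were swapped
            (the old ARM FPA layout) as a normal IEEE-754 double."""
            assert len(raw) == 8
            return struct.unpack("<d", raw[4:] + raw[:4])[0]

        # usage sketch: a value that decodes as nonsense until the halves swap back
        normal = struct.pack("<d", 62.09)
        swapped = normal[4:] + normal[:4]       # what an FPA build would have written
        print(struct.unpack("<d", swapped)[0])  # gibberish, like the NAS output
        print(double_from_fpa(swapped))         # 62.09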

    Read the article

  • Software solution from the 2000s: should I attempt to patch or remake the whole thing?

    - by ShadowScripter
    I was sent out to discuss a system that a certain company is currently using and what should be done with it. The company manufactures various carton displays, and this system was developed to keep track of clients, orders and prices. A lot has happened since the system was created, and it is now, as the manager described it, "locked up" and "problematic", which I translate as "not dynamic" and "unstable". Some info about the system: it was developed around the year 2000; it is fairly small (2-5 users, 6 forms, ~8 tables with average quantities of data); it was built on early Visual Basic, with forms created via drag-and-drop design; the interface is basically just a window with a menu and some forms; it uses an MSSQL database (SQL 2005 server) to store data and an ODBC driver to query it (the data was migrated from Excel before this system, and before Excel it was handled, calculated and written by hand on paper); users work in a Microsoft XP environment (and up). Their main problem is that they can't adjust and calculate prices, can't add new carton types, etc., anymore, because they can't (or rather, they don't know how to) touch the data on the server. I suggested three possible solutions: attempt to patch the current system; create a fresh new interface (preferably in a similar environment, VB.net or VB based); or bring it back to an Excel solution, considering it is such a small system. There might be more options, but these are the ones I could think of. My questions are: What should I recommend, and why? What are (or could be) the pros and cons of these alternatives? Are there other (possibly better) alternatives?

    Read the article

  • dual-boot does not work

    - by elyashiv
    I have a PC with Linux Mint installed on it. I wanted to install Windows 7 alongside it, for some reasons. What I did was: create a bootable USB stick with an Ubuntu ISO; restart the computer into Ubuntu (running from the USB stick); create a partition on the main HD using GParted; format the partition to NTFS; restart the computer, this time with the Windows 7 installation CD; and install Windows 7 with normal settings. That all worked, and I'm writing this from Windows 7. The thing is, when I boot the system I don't get to choose which OS to run. I checked the settings in msconfig, and the boot tab lists just Windows 7. How can I boot Linux?

    Read the article

  • Updating entities in response to collisions - should this be in the collision-detection class or in the entity-updater class?

    - by Prog
    In a game I'm working on, there's a class responsible for collision detection. Its method detectCollisions(List<Entity> entities) is called from the main game loop. The code to update the entities (i.e. where the entities 'act': update their positions, invoke AI, etc.) is in a different class, in the method updateEntities(List<Entity> entities), also called from the game loop, after the collision detection. When there's a collision between two entities, usually something needs to be done - for example, zero the velocity of both entities in the collision, or kill one of the entities. It would be easy to have this code in the CollisionDetector class, e.g. in pseudocode (iterating i < j so each pair is tested once and an entity is never tested against itself):

        for (int i = 0; i < entities.size(); i++) {
            for (int j = i + 1; j < entities.size(); j++) {
                Entity entityA = entities.get(i);
                Entity entityB = entities.get(j);
                if (collision(entityA, entityB)) {
                    if (entityA instanceof Robot && entityB instanceof Robot) {
                        entityA.setVelocity(0, 0);
                        entityB.setVelocity(0, 0);
                    }
                    if (entityA instanceof Missile || entityB instanceof Missile) {
                        entityA.die();
                        entityB.die();
                    }
                }
            }
        }

    However, I'm not sure that updating the state of entities in response to a collision should be the job of CollisionDetector. Maybe it should be the job of EntityUpdater, which runs after the collision detection in the game loop. Is it okay to have the code responding to collisions in the collision detection system? Or should the collision detection class only detect collisions, report them to some other class, and have that class affect the state of the entities?
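
    On the "report them to some other class" option, a minimal sketch of the shape it usually takes (Python for brevity; all names and the toy circle-collision test are illustrative): the detector stays a pure function from entities to colliding pairs, and a separate resolver owns the responses:

        from itertools import combinations

        class Entity:
            """Minimal illustrative entity: a point with a radius."""
            def __init__(self, x, y, r):
                self.x, self.y, self.r = x, y, r
                self.velocity = (1, 1)

            def intersects(self, other):
                return ((self.x - other.x) ** 2 + (self.y - other.y) ** 2
                        <= (self.r + other.r) ** 2)

            def on_collide(self, other):
                self.velocity = (0, 0)  # illustrative response

        def detect_collisions(entities):
            """Pure detection: return colliding pairs, mutate nothing."""
            return [(a, b) for a, b in combinations(entities, 2) if a.intersects(b)]

        class CollisionResolver:
            """Owns the responses, so detection stays reusable and testable."""
            def resolve(self, pairs):
                for a, b in pairs:
                    a.on_collide(b)
                    b.on_collide(a)

        # game-loop sketch: detect first, then resolve, then run entity updates
        entities = [Entity(0, 0, 1), Entity(1, 0, 1), Entity(9, 9, 1)]
        CollisionResolver().resolve(detect_collisions(entities))
        print([e.velocity for e in entities])  # first two collided and stopped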

    Read the article

  • How to prevent firefox from updating

    - by Larry
    I have Firefox 3.6.x on Linux Mint. For physical reasons - eyes and color - I want to keep this version and never update it. I don't know if this is a Firefox or a Linux issue, but nothing seems to work, and it shouldn't take a rocket scientist. :P This is what I have tried: in Update Manager, added "firef0x*" to the ignored packages; fully removed all traces of Firefox and reinstalled 3.6.28; in Firefox, deselected all three upgrade options and saved; scanned the computer for "firef*" to ensure all traces were removed, including /usr/bin/firefox. I'm using Linux Mint 9, otherwise fully upgraded. Those are the main things I've done. My major issue is that the later versions of the software are almost impossible for me to see.

    Read the article

  • How to build an API on top of an existing Rails app with NodeJs and what architecture to use?

    - by javiayala
    The explanation: I was recently hired by a company that has an old RoR 2.3 application with more than 100k users, a strong SEO strategy with more than 170k indexed URLs, native Android and iOS applications, and other custom-made mobile and web applications that rely on a not-so-good API from the same RoR app. The company recently merged with a company from another country as a strategy to grow the business and the profit. The other company has almost the same stats, a similar strategy, and mobile apps. We have just decided that we need to merge the data from both companies and start a new app from scratch, since the RoR app is too old and heavily patched and the other company's app was built with a custom PHP framework without any documentation. The only good news is that both databases are in MySQL and have a similar structure. The challenge: I need to build a new version that can handle a lot of traffic, preserves the SEO strategies of both companies, serves two different domains, and has a strong API that can support the legacy mobile apps from both companies and be ready for a new set of native apps. I want to use RoR 3.2 for the main web apps and Node.js with a RESTful API. I know that I need to be very careful with the mobile apps and handle multiple versions of the API. I also think I need to create a service that can handle a lot of IO requests, since the apps are heavily used to create orders for restaurants at certain times of the day. The questions: With all this in mind, what type of architecture do you recommend I follow? What gems or Node packages do you think will work best? How do I build a new Rails app and keep using the same database structure? Should I use Node.js to build the API, or just build a new service with Ruby? I know I'm asking a lot, but please help by answering any topic you can or by pointing me in the right direction. All your comments and feedback will be extremely appreciated. Thanks!
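
    On the multiple-API-versions point, the usual shape is to pin legacy clients to a frozen version prefix while new apps track the current one. Sketched in Python/Flask for brevity (the same structure maps onto Express or Rails routing; the paths and payload fields are invented):

        from flask import Flask, jsonify

        app = Flask(__name__)

        # v1 is frozen: it mirrors what the legacy mobile apps already expect.
        @app.route("/api/v1/orders/<int:order_id>")
        def get_order_v1(order_id):
            return jsonify({"id": order_id, "total": "12.50"})  # old flat shape

        # v2 is where new clients (and new fields) go; v1 is never changed.
        @app.route("/api/v2/orders/<int:order_id>")
        def get_order_v2(order_id):
            return jsonify({"id": order_id,
                            "total": {"amount": 12.50, "currency": "USD"}})

        if __name__ == "__main__":
            app.run()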

    Read the article

  • Creating deterministic key pairs in javascript for use in encrypting/decrypting/signing messages

    - by SlickTheNick
    So I have been searching everywhere and haven't been able to find anything with the information I need, so I'm a bit stumped on this one at the moment. What I am trying to do is create a public/private key pair (like PGP) upon a user's account creation, based on their passphrase and a random seed. The public key would be saved on the server, and ideally the private key would never be seen by the server whatsoever. The user could then sign in and send a message to another user. Before the message is sent, the sender's key pair would be regenerated on the fly based on their credentials (and maybe a password prompt) and used to encrypt the message. The receiver would then use their own regenerated private key to decrypt said message. The server itself should never see any plaintext passwords, private keys, or readable messages. I'm a bit unsure how to go about implementing this. I've been looking into PGP, specifically openpgp.js. The main trouble I am having is being able to regenerate the key pair from a specific seed; PGP seems to produce random output even if the inputs are the same. Storing the private key in a cookie or in HTML5 storage also isn't really an option - too unreliable. Can anyone point me in the right direction?
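
    The underlying recipe is language-independent: stretch the passphrase plus a stored per-user salt into a fixed-size seed with a KDF, then feed that seed to a key algorithm that derives its key pair deterministically. A sketch in Python (the question wants JavaScript, but the same steps apply there; the KDF parameters are illustrative, and Ed25519 gives a signing pair - the identical derivation with X25519 gives an encryption pair):

        import hashlib
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
        from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

        def derive_keypair(passphrase: str, salt: bytes):
            """Deterministically derive a signing key pair from a passphrase.
            Same passphrase + same salt -> same key pair, every time."""
            # KDF step: stretch the passphrase into exactly 32 seed bytes.
            seed = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt,
                                       200_000, dklen=32)
            private_key = Ed25519PrivateKey.from_private_bytes(seed)
            return private_key, private_key.public_key()

        # usage sketch: the salt is random once, then stored server-side with
        # the public key; the private key itself is never stored anywhere.
        salt = b"per-user-random-salt"  # illustrative; use os.urandom(16) in practice
        priv1, pub1 = derive_keypair("correct horse battery staple", salt)
        priv2, pub2 = derive_keypair("correct horse battery staple", salt)
        assert (pub1.public_bytes(Encoding.Raw, PublicFormat.Raw)
                == pub2.public_bytes(Encoding.Raw, PublicFormat.Raw))  # deterministic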

    Read the article

  • WINAPI beginner guidance question

    - by gekod
    I'm learning to develop Windows applications using the WinAPI and plain C. Now I've gotten a bit confused by all those handles, and I'd like to ask you to teach me some good practices for structuring and handling controls and windows. Here's where I get confused: using the IDs declared in the resources for each object, we can get their handles with GetDlgItem(). But what if we don't know their parent, which this function requires? One example: we have the main window, created at launch. Then we register two new window classes, create a window for each new class, and create a message function for each, too. Now suppose that inside one of the child windows I create a button, and inside the other child window I create a text label. When the button inside child window A is clicked, the label in child window B shall be modified to whatever. The WM_COMMAND for the button is handled inside the message loop of child window A. What would be the best and most elegant way to access the text label inside child window B? I am in the process of learning the WinAPI and just want to learn it right from the start, instead of producing hacked code that someday becomes unreadable, only to have to adapt to a new way of programming later.

    Read the article
