Search Results

Search found 21727 results on 870 pages for 'google visualization'.

Page 648/870 | < Previous Page | 644 645 646 647 648 649 650 651 652 653 654 655  | Next Page >

  • Automatically detect faces in a picture

    - by abel
    At my workplace, passport-sized photographs are scanned together, then cut up into individual pictures and saved with unique file numbers. Currently we use Paint.NET to manually select, cut and save the pictures. I have seen that Sony's Cybershot cameras have face detection. Google also points me to iPhoto when searching for face detection, and Picasa has face detection too. Are there any ways to automatically detect the faces in a document, which would improve productivity at my workplace by reducing the time needed to cut up the individual images? Sample scanned document (a real document has 5 rows of 4 images each = 20 pics): (from: http://www.memorykeeperphoto.com/images/passport_photo.jpg, fair use) For example, in Picasa 3.8, on clicking View People all the faces are shown and I am asked to name them; can I automatically save these individual faces as separate pictures, using those names?
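
    A minimal sketch of the kind of automation being asked about, using OpenCV's bundled Haar cascade (an assumption; the input filename, the 40-pixel padding and the output naming scheme are illustrative, not part of the original workflow):

        # face_crop.py - detect faces in a scanned sheet and save each as its own file (sketch).
        import cv2

        sheet = cv2.imread("scanned_sheet.jpg")              # hypothetical input scan
        gray = cv2.cvtColor(sheet, cv2.COLOR_BGR2GRAY)

        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

        pad = 40                                              # grow each box to keep shoulders/background
        for i, (x, y, w, h) in enumerate(faces):
            x0, y0 = max(x - pad, 0), max(y - pad, 0)
            x1, y1 = min(x + w + pad, sheet.shape[1]), min(y + h + pad, sheet.shape[0])
            cv2.imwrite(f"photo_{i:02d}.jpg", sheet[y0:y1, x0:x1])

    In practice the detected boxes would still need a naming step (Picasa-style), but the cropping itself is automatic.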

    Read the article

  • How can I fix my WRT54GL's constant crashing?

    - by Aarthi
    I have a Linksys WRT54GL wireless router (the old blue-and-black) whose underside indicates it is Version 2. I've noticed that, in wireless mode, if I am on a Skype call or in a Google Hangout, the wireless side will crash completely. In addition, if I am connected via an ethernet cord, my quality (that is, how my voice is received) tanks very quickly. I suspect this is due, in part, to my internet connectivity itself (I'm on Comcast instead of Verizon FiOS, as I'd prefer), but I'd like to stop my wireless router's wireless capability from crashing. I considered a firmware upgrade, but it looks to me as if I am already on the latest firmware. Short of manually running ethernet all over my house, I'm not sure what to do. How can I solve my wireless router's issues? If the answer is "buy a new router," then that's valid as well, in my opinion.

    Read the article

  • What are the hard and fast rules for Cache Control?

    - by Metalshark
    Confession: sites I maintain have different rules for Cache-Control, mostly based on the default configuration of the server followed up with recommendations from the Page Speed and YSlow Firefox plug-ins and the Network Resources view in Google's Speed Tracer. Cache-Control is set to private/public depending on what they say to do, ETag/Last-Modified headers are only tinkered with if YSlow suggests there is something wrong, and Vary: Accept-Encoding seems necessary when manually gzipping files for Amazon CloudFront. When reading through the material on the different options and what they do, there seems to be conflicting information, rules for broken proxies and cargo-cult configurations. Any of the official information provided by the analysis tools mentioned above is quite inaccessible as it deals with each topic individually instead of as a unified strategy (so there is no cross-referencing of techniques). For example, it seems to make no sense that the speed analysis tools rate a site with ETags the same as a site without them if they are meant to help with caching. What are the hard and fast rules for a platform-agnostic Cache-Control strategy?
    EDIT: A link to Jeff Atwood's article explains caching in superb depth. For the record, though, here are the hard and fast rules:
    - If the file is compressed using gzip, etc., use "Cache-Control: private", as a proxy may return the compressed version to a client that does not support it (the browser cache will hold files marked this way, though). Also remember to include a "Vary: Accept-Encoding" header to say that it is compressible.
    - Use Last-Modified in conjunction with ETag - belt-and-braces usage provides both validators, and since ETag is based on file contents instead of modification time alone, using both covers all bases. NOTE: AOL's PageTest has a carte blanche approach against ETags for some reason. If you are using Apache on more than one server to host the same content, then remove the implicitly declared inode from ETags by excluding it from the FileETag directive (i.e. "FileETag MTime Size"), unless you are genuinely using the same live filesystem.
    - Use "Cache-Control: public" wherever you can - this means that proxy servers (and the browser cache) will return your content even if the rest of the page needs HTTP authentication, etc.
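
    As a worked illustration of those rules, a small helper that assembles the response headers they describe (a sketch only; the max-age value and the decision to branch purely on whether the body is gzipped are assumptions, not something prescribed above):

        # cache_headers.py - build headers following the "hard and fast rules" above (sketch).
        from email.utils import formatdate

        def cache_headers(last_modified_ts, etag, gzipped, max_age=86400):
            headers = {
                # compressed responses stay private so a broken proxy can't serve
                # a gzipped body to a client that did not ask for it
                "Cache-Control": ("private" if gzipped else "public") + f", max-age={max_age}",
                # both validators: modification time and a content-based ETag
                "Last-Modified": formatdate(last_modified_ts, usegmt=True),
                "ETag": f'"{etag}"',
            }
            if gzipped:
                headers["Vary"] = "Accept-Encoding"
            return headers

        print(cache_headers(1700000000, "abc123", gzipped=True))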

    Read the article

  • Multilingual website without language component in the URL

    - by user359650
    I'm working on a website for Canada which will have French and English versions. For SEO purposes, I would like to avoid using any language tag in URLs because I believe it will have more impact (e.g. example.ca/products is better than en.example.ca/products or example.ca/en/products). I believe this is technically possible because the 2 languages are sufficiently different that the URLs won't conflict with one another (e.g. if you want a "product" page, it will be /products in English and /produits in French, so you know which language the URL is about). Since Google (and most likely others) doesn't rely on the URL (nor HTML tags) to determine the content language, I don't see any problems with search engines. To make this possible I've thought about using a cookie distinct from the session cookie (e.g. example.org_language) with a long-term expiry (e.g. N years) that will memorize the language chosen by the user. That way, when people visit the website with a new browser session, they get served the proper language. I have already given up on users being able to switch a single page from English to French: when people choose English or French from the menu they will be redirected to the corresponding version of the home page. Do you foresee any problems with not using a language component in the URL (whether domain or path), as long as one makes sure URLs don't conflict?
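
    A minimal sketch of the cookie scheme described above, as a plain function that picks the language for a request (the cookie name follows the question; the Accept-Language fallback and the ten-year lifetime are assumptions):

        # language_choice.py - pick the page language from a long-lived cookie (sketch).
        TEN_YEARS = 10 * 365 * 24 * 3600

        def pick_language(cookies, accept_language=""):
            """cookies: dict of request cookies; accept_language: raw header value."""
            lang = cookies.get("example.org_language")
            if lang in ("en", "fr"):
                return lang
            # first visit: fall back to the browser's preference, default to English
            return "fr" if accept_language.lower().startswith("fr") else "en"

        def language_cookie(lang):
            """Set-Cookie value that remembers the menu choice for future sessions."""
            return f"example.org_language={lang}; Max-Age={TEN_YEARS}; Path=/"

        print(pick_language({}, "fr-CA,fr;q=0.9"))   # -> 'fr'
        print(language_cookie("en"))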

    Read the article

  • How do I configure Ruby On Rails on windows XP with APACHE and MYSQL

    - by Gaurav Sharma
    Hello everyone, I have been struggling for quite some time to get Ruby on Rails working on my system, which runs Windows XP. I am trying to configure Rails to use Apache and MySQL so that I do not have to install additional servers to run Ruby on Rails. I also tried InstantRails but faced the same problems. I went through the tutorial on getting Rails to work on a Windows machine running XAMPP and did all the necessary steps. All went fine (installing Rails, running the ruby, gem and rails commands from the command prompt), but when I tried to run my application by typing localhost:3000/say/hello, nothing happened and I was redirected to a Google search for that keyword. Please help me. Thanks

    Read the article

  • What is your most preferred method of site pagination?

    - by John Smith
    There seem to be quite a few implementations of this feature. Some sites like Stack Exchange have it laid out like this: [1][2][3][4][5] ... [954][Next] Other sites, like game forums, may have something like this: [1][2][3] ... [10] ... [50] ... [500] ... [954][Next] Some sites like webcomics (XKCD comes to mind) have it laid out like this: [Last][Prev][Random][Next][First] Reddit has a very simple pagination with only: [Prev][Next] Sites like Stack Exchange and Google also allow you to change how many results you want per page. Personally, I have never used this feature. Is it even worth including, or does it just further confuse the design with needless features? Personally, I have only ever seen the need for the webcomic style (without the random button). If I need to go to a specific page (which is very, very rare) then I can just edit the address bar. Is it good design to make something more complex for rare occasions where it might save the user some time? Is having to edit the address bar to navigate the site effectively in some circumstances bad design?
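
    For reference, the first layout mentioned above reduces to a small windowing function; a sketch (the window size and the use of None as the "..." gap marker are arbitrary choices here):

        # pagination.py - build a Stack Exchange-style page list: [1][2][3][4][5] ... [954] (sketch).
        def page_window(current, total, window=5):
            """Return page numbers around `current` plus first/last, with None for gaps."""
            start = max(1, min(current - window // 2, total - window + 1))
            pages = list(range(start, min(start + window, total + 1)))
            if pages[0] > 1:
                pages = [1, None] + pages          # None renders as "..."
            if pages[-1] < total:
                pages = pages + [None, total]
            return pages

        print(page_window(648, 870))   # -> [1, None, 646, 647, 648, 649, 650, None, 870]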

    Read the article

  • Backup script that excludes large files using Duplicity and Amazon S3

    - by Jason
    I'm trying to write a backup script that will exclude files over a certain size. My script generates the proper command, but when run within the script it outputs an error. However, if the same command is run manually everything works...??? Here is the script, based on one easily found with Google:

        #!/bin/bash
        # Export some ENV variables so you don't have to type anything
        export AWS_ACCESS_KEY_ID="accesskey"
        export AWS_SECRET_ACCESS_KEY="secretaccesskey"
        export PASSPHRASE="password"

        SOURCE=/home/
        DEST=s3+http://s3bucket
        GPG_KEY="7743E14E"

        # exclude files over 100MB
        exclude ()
        {
            find /home/jason -size +100M \
            | while read FILE; do
                echo -n " --exclude "
                echo -n \'**${FILE##/*/}\' | sed 's/\ /\\ /g'   #Replace whitespace with "\ "
            done
        }

        echo "Using Command"
        echo "duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST"

        duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST

        # Reset the ENV variables.
        export AWS_ACCESS_KEY_ID=
        export AWS_SECRET_ACCESS_KEY=
        export PASSPHRASE=

    If run, I receive the error:

        Command line error: Expected 2 args, got 6
        Enter 'duplicity --help' for help screen.

    Any help you could offer would be greatly appreciated.

    Read the article

  • Bacula vs. BackupPC [closed]

    - by ujjain
    I have been Googling about the differences between them. Bacula has lots of roles; BackupPC is easier to configure; Bacula works with an agent, not rsync (great for Windows backups). It seems that Bacula is most often compared to Amanda, though, while BackupPC seems a perfectly lovely and popular backup solution too. I currently back up my servers with rsnapshot, but I am looking for a professional, scalable solution that could also back up 50 hosts without problems. Preferably a solution that can offer bare-metal restores for my Linux servers. I am not looking to reinstall the exact same version of Plesk, the software, etc... Update: I see this ranks high in Google, and I found a good article: http://www.serverfocus.org/backuppc-vs-bacula-vs-amanda. I personally think that BackupPC is good for smaller environments, but Bacula, despite the high learning curve, is better for environments that require scaling.

    Read the article

  • django & postgres linux hosting (with SSH access) recommendations

    - by Justin Grant
    We're looking for a good place to host our custom Django app (a fork of OSQA) and its PostgreSQL backend. Requirements include:
    - Linux
    - Python 2.6 or (ideally) Python 2.7
    - Django 1.2
    - Postgres 8.4 or later
    - DB backup/restore handled by the hoster, not us
    - OS & dev-platform-stack patching/maintenance handled by the hoster, not us
    - SSH access (so we can pull source code from GitHub, install Python eggs, etc.)
    - ability to set up cron jobs (e.g. to send out daily email updates)
    - ability to send up to 10K emails/day
    - good performance (not ganged up with a zillion other sites on one CPU, not starved for RAM)
    - FTP or SCP access to web logs
    - dedicated public IP
    - SSL support
    - costs under $1000/month for a relatively small site (<5M pageviews/month)
    - good customer service
    We already have a prototype site running on EC2 on top of a Bitnami DjangoStack. The problem is that we have to patch the OS, patch Postgres, etc. We'd really prefer a platform-as-a-service (PaaS) offering, like Heroku offers for Rails apps, where all we need to worry about is deploying our code instead of worrying about system software patching and maintenance. Google App Engine is closest to what we're looking for, but they don't offer relational DB access (not yet at least). Anyone have a recommendation?

    Read the article

  • Running multiple box2D world objects on a server

    - by CharbelAbdo
    I'm creating a multiplayer game using LibGdx (with Box2d) and Kryonet. Since this is the first time I work on multiplayer games, I read a bit about server - client implementations, and it turns out that the server should handle important tasks like collision detection, hits, characters dying etc... Based on some articles (like the excellent Gabriel Gambetta Fast paced multiplayer series), I also know that the client should work in parallel to avoid the lag while the server responds to commands. Physics wise, each game will have 2 players, and any projectiles fired. What I'm thinking of doing is the following: Create a physics world on the client When the game is signaled to start, I create the same physics world on the server (without any rendering obviously). Whenever the player issues a command (move or fire), I send the command to the server and immediately start processing it on the client. When the server receives the command, it applies it on the server's world (set velocity etc...) Each 100ms, the server sends the new state to the client which corrects what was calculated locally. Any critical action (hit, death, level up) is calculated only on the server and sent to the client. Essentially, I would have a Box2d World object running on the server for each game in progress, in sync with the worlds running on the clients. The alternative would be to do my own calculations on the server instead of relying on Box2D to do them for me, but I'm trying to avoid that. My question is: Is it wise to have, for example, 1000 instances of the World object running and executing steps on the server? Tomcat used around 750 MBytes of memory when trying it without any object added to the world. Anybody tried that before? If not, is there any alternative? Google did not help me, are there any guidelines to use when you want to have physics on both the client and the server? Thanks for any help.
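
    For what the server side of that can look like in its simplest form, here is a sketch of stepping many independent worlds at a fixed timestep (written with pybox2d purely for illustration, since the question is about the approach rather than LibGDX specifically; the 60 Hz step, iteration counts and session count are common defaults, not requirements):

        # server_physics.py - step one Box2D world per game in progress (sketch, pybox2d).
        import time
        from Box2D import b2World

        TIME_STEP = 1.0 / 60          # fixed simulation step
        VEL_ITERS, POS_ITERS = 6, 2   # common Box2D iteration counts

        class GameSession:
            def __init__(self):
                self.world = b2World(gravity=(0, -10), doSleep=True)
                self.accumulator = 0.0

            def advance(self, dt):
                # fixed-timestep accumulator keeps each session's physics deterministic
                self.accumulator += dt
                while self.accumulator >= TIME_STEP:
                    self.world.Step(TIME_STEP, VEL_ITERS, POS_ITERS)
                    self.accumulator -= TIME_STEP

        sessions = [GameSession() for _ in range(100)]   # one world per 2-player game

        previous = time.perf_counter()
        while True:
            now = time.perf_counter()
            dt, previous = now - previous, now
            for session in sessions:
                session.advance(dt)
            time.sleep(0.001)   # yield; a real server would also broadcast state every ~100 ms

    The memory question in the post is really about how many such sessions fit in one process, which this kind of loop makes straightforward to measure.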

    Read the article

  • How do I install iTunes?

    - by David
    I have an iPhone and run Ubuntu on all of my personal computers. Since I did not want to keep a separate partition with Windows on it for the sole purpose of running iTunes, I attempted to install it using Wine. I installed Wine 1.4 from the Software Center and installed iTunes 10.6.3. When I tried to run it I got a slew of error messages. I hopped over to Google, where it was suggested that I install it through PlayOnLinux. I did so, with the same result. Further googling revealed that iTunes 10.6.x is confirmed to work with Wine 1.5.1 and up. I installed Wine 1.5.1 following the instructions I found and was unable to get it to open. I did the same with 1.5.9 with the same results. I opened the Package Manager and installed the Wine 1.5.9 packages through it, and it appears to have installed properly. When trying to install iTunes I got the error "This iTunes installer requires Windows Vista 64 bit or later". Realizing that Wine uses XP as a default, I ran winecfg and changed it to Windows 7. This changed nothing, and I tried changing it through winetricks to no avail. I even changed it to Vista with the same results. Does anyone know what is going wrong here and how to fix it? Thanks

    Read the article

  • Internet connection very slow after Linksys configuration

    - by NLV
    Hello. We have this network setup:
    - Server1 - DHCP server, Domain Controller, AD
    - Leased line for the Internet connection
    - From the leased line to a Linksys router (we don't use wireless, though)
    - From the Linksys to a Netgear 24-port switch and Vonage (VoIP)
    - Netgear to all our machines
    We configured the Linksys with the static IP and DNS server addresses our ISP gave and we have routed it correctly. All our work machines are configured to get an IP automatically and use the DNS server addresses our ISP gave. The problem is that none of the sites open promptly. It is taking around 5 minutes to load google.com, but we are able to ping all the sites. What could be the problem?

    Read the article

  • dependency hell

    - by Delirium tremens
    I'm trying to install Empathy. The current version has to be installed from source, but needs a list of things that have to be installed one by one. The previous version is in the repository, but blinks (opens, then right after that, closes). For the previous version of the previous version:
    - apt-cache search -showpkg empathy shows general Empathy information and a telepathy package too, but not the rpm file name
    - taking the rpm file name from a Google search result, apt-get install package=empathy-2.30.1-2pclos2010 says package package (twice, really) not found
    - after installing apturl and clicking the rpm file link, opening it with apturl starts the installation GUI, but it fails
    - opening the rpm file with Synaptic doesn't work
    - opening the rpm file with /usr/bin/apt-get doesn't work
    What now?

    Read the article

  • Python or Ruby in 2011.

    - by Sleeper Smith
    What I'm really asking is: given the current services and technologies available, which is the more "useful" language? Which one has more opportunity? Some background info first: I've been a .NET C# dev for 5 years. Having done a few projects on Amazon AWS, I'm looking to start a few projects of my own. But Azure's too expensive, and AWS has too much management overhead. My current choice is Google App Engine and Python. Logical enough. But what I want to ask here is this: in the Linux world, which is more useful? I recently heard about Heroku for Ruby. How viable is this? Looking at the pricing model indicates that it's more expensive. Which one has more up-to-date and exciting open source projects? For instance, Trac is just plain outdated compared to Redmine. One of the big reasons pulling me toward Ruby is Redmine. Implementations? IronPython/IronRuby/JRuby etc. Which one is more standardised and more implementation-agnostic? Which one is easier to port between Windows/Linux? Anyway, your input and thoughts are greatly appreciated. Thanks.

    Read the article

  • Software development is (mostly) a trade, and what to do about it

    - by Jeff
    (This is another cross-post from my personal blog. I don’t even remember when I first started to write it, but I feel like my opinion is well enough baked to share.) I've been sitting on this for a long time, particularly as my opinion has changed dramatically over the last few years. That I've encountered more crappy code than maintainable, quality code in my career as a software developer only reinforces what I'm about to say. Software development is just a trade for most, and not a huge academic endeavor. For those of you with computer science degrees readying your pitchforks and collecting your algorithm interview questions, let me explain. This is not an assault on your way of life, and if you've been around, you know I'm right about the quality problem. You also know the HR problem is very real, or we wouldn't be paying top dollar for mediocre developers and importing people from all over the world to fill the jobs we can't fill. I'm going to try and outline what I see as some of the problems, and hopefully offer my views on how to address them. The recruiting problem I think a lot of companies are doing it wrong. Over the years, I've had two kinds of interview experiences. The first, and right, kind of experience involves talking about real life achievements, followed by some variation on white boarding in pseudo-code, drafting some basic system architecture, or even sitting down at a comprooder and pecking out some basic code to tackle a real problem. I can honestly say that I've had a job offer for every interview like this, save for one, because the task was to debug something and they didn't like me asking where to look ("everyone else in the company died in a plane crash"). The other interview experience, the wrong one, involves the classic torture test designed to make the candidate feel stupid and do things they never have, and never will do in their job. First they will question you about obscure academic material you've never seen, or don't care to remember. Then they'll ask you to white board some ridiculous algorithm involving prime numbers or some kind of string manipulation no one would ever do. In fact, if you had to do something like this, you'd Google for a solution instead of waste time on a solved problem. Some will tell you that the academic gauntlet interview is useful to see how people respond to pressure, how they engage in complex logic, etc. That might be true, unless of course you have someone who brushed up on the solutions to the silly puzzles, and they're playing you. But here's the real reason why the second experience is wrong: You're evaluating for things that aren't the job. These might have been useful tactics when you had to hire people to write machine language or C++, but in a world dominated by managed code in C#, or Java, people aren't managing memory or trying to be smarter than the compilers. They're using well known design patterns and techniques to deliver software. More to the point, these puzzle gauntlets don't evaluate things that really matter. They don't get into code design, issues of loose coupling and testability, knowledge of the basics around HTTP, or anything else that relates to building supportable and maintainable software. The first situation, involving real life problems, gives you an immediate idea of how the candidate will work out. One of my favorite experiences as an interviewee was with a guy who literally brought his work from that day and asked me how to deal with his problem. 
I had to demonstrate how I would design a class, make sure the unit testing coverage was solid, etc. I worked at that company for two years. So stop looking for algorithm puzzle crunchers, because a guy who can crush a Fibonacci sequence might also be a guy who writes a class with 5,000 lines of untestable code. Fashion your interview process on ways to reveal a developer who can write supportable and maintainable code. I would even go so far as to let them use the Google. If they want to cut-and-paste code, pass on them, but if they're looking for context or straight class references, hire them, because they're going to be life-long learners. The contractor problem I doubt anyone has ever worked in a place where contractors weren't used. The use of contractors seems like an obvious way to control costs. You can hire someone for just as long as you need them and then let them go. You can even give them the work that no one else wants to do. In practice, most places I've worked have retained and budgeted for the contractor year-round, meaning that the $90+ per hour they're paying (of which half goes to the person) would have been better spent on a full-time person with a $100k salary and benefits. But it's not even the cost that is an issue. It's the quality of work delivered. The accountability of a contractor is totally transient. They only need to deliver for as long as you keep them around, and chances are they'll never again touch the code. There's no incentive for them to get things right, there's little incentive to understand your system or learn anything. At the risk of making an unfair generalization, craftsmanship doesn't matter to most contractors. The education problem I don't know what they teach in college CS courses. I've believed for most of my adult life that a college degree was an essential part of being successful. Of course I would hold that bias, since I did it, and have the paper to show for it in a box somewhere in the basement. My first clue that maybe this wasn't a fully qualified opinion comes from the fact that I double-majored in journalism and radio/TV, not computer science. Eventually I worked with people who skipped college entirely, many of them at Microsoft. Then I worked with people who had a masters degree who sucked at writing code, next to the high school diploma types that rock it every day. I still think there's a lot to be said for the social development of someone who has the on-campus experience, but for software developers, college might not matter. As I mentioned before, most of us are not writing compilers, and we never will. It's actually surprising to find how many people are self-taught in the art of software development, and that should reveal some interesting truths about how we learn. The first truth is that we learn largely out of necessity. There's something that we want to achieve, so we do what I call just-in-time learning to meet those goals. We acquire knowledge when we need it. So what about the gaps in our knowledge? That's where the most valuable education occurs, via our mentors. They're the people we work next to and the people who write blogs. They are critical to our professional development. They don't need to be an encyclopedia of jargon, but they understand the craft. Even at this stage of my career, I probably can't tell you what SOLID stands for, but you can bet that I practice the principles behind that acronym every day. That comes from experience, augmented by my peers. 
I'm hell bent on passing that experience to others. Process issues If you're a manager type and don't do much in the way of writing code these days (shame on you for not messing around at least), then your job is to isolate your tradespeople from nonsense, while bringing your business into the realm of modern software development. That doesn't mean you slap up a white board with sticky notes and start calling yourself agile, it means getting all of your stakeholders to understand that frequent delivery of quality software is the best way to deal with change and evolving expectations. It also means that you have to play technical overlord to make sure the education and quality issues are dealt with. That's why I make the crack about sticky notes, because without the right technique being practiced among your code monkeys, you're just a guy with sticky notes. You're asking your business to accept frequent and iterative delivery, now make sure that the folks writing the code can handle the same thing. This means unit testing, the right instrumentation, integration tests, automated builds and deployments... all of the stuff that makes it easy to see when change breaks stuff. The prognosis I strongly believe that education is the most important part of what we do. I'm encouraged by things like The Starter League, and it's the kind of thing I'd love to see more of. I would go as far as to say I'd love to start something like this internally at an existing company. Most of all though, I can't emphasize enough how important it is that we mentor each other and share our knowledge. If you have people on your staff who don't want to learn, fire them. Seriously, get rid of them. A few months working with someone really good, who understands the craftsmanship required to build supportable and maintainable code, will change that person forever and increase their value immeasurably.

    Read the article

  • Making a perfect map (not tile-based)

    - by Sri Harsha Chilakapati
    I would like to make a map system like the one in GameMaker; the latest code is here. I've searched a lot on Google and everything turned up tutorials about tile maps. As tile maps do not fit every type of game, and GameMaker uses tiles for a different purpose, I want to make a "sprite based" map. The major problem I experienced was collision detection being slow for large maps. So I wrote a QuadTree class (here) and the collision detection is fine up to 50000 objects in the map without pixel-perfect collision detection and 30000 objects with pixel-perfect collisions enabled. Now I need to implement the method "isObjectCollisionFree(float x, float y, boolean solid, GObject obj)". The existing implementation is becoming slow in platformer games and I need suggestions on improvement. The current implementation:

        /**
         * Checks if a specific position is collision free in the map.
         *
         * @param x The x-position of the object
         * @param y The y-position of the object
         * @param solid Whether to check only for solid objects
         * @param object The object (used for width and height)
         * @return True if no collision and false if it collides.
         */
        public static boolean isObjectCollisionFree(float x, float y, boolean solid, GObject object){
            boolean bool = true;
            Rectangle bounds = new Rectangle(Math.round(x), Math.round(y), object.getWidth(), object.getHeight());
            ArrayList<GObject> collidables = quad.retrieve(bounds);
            for (int i=0; i<collidables.size(); i++){
                GObject obj = collidables.get(i);
                if (obj.isSolid()==solid && obj != object){
                    if (obj.isAlive()){
                        if (bounds.intersects(obj.getBounds())){
                            bool = false;
                            if (Global.USE_PIXELPERFECT_COLLISION){
                                bool = !GUtil.isPixelPerfectCollision(x, y, object.getAnimation().getBufferedImage(),
                                                                      obj.getX(), obj.getY(), obj.getAnimation().getBufferedImage());
                            }
                            break;
                        }
                    }
                }
            }
            return bool;
        }

    Thanks.

    Read the article

  • Why times elapsed connecting to a server are different?

    - by user1634619
    I have a small program which connects to a server of my choice and measures the time elapsed to do so. Each time I run it, it returns a different result. My question is: what does this time depend on? Network congestion, for one. If I choose a server that has multiple addresses, e.g. google.com, the length of the physical link may differ from time to time; is it safe to assume that this also affects connection time? Are there any other factors in play?
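
    A sketch of the kind of measurement being described, which also makes two of the variable factors visible: DNS resolution time and which of the server's several addresses gets used on a given run (the host, port and timeout are placeholders):

        # connect_time.py - measure TCP connect time and show which address was used (sketch).
        import socket
        import time

        def connect_time(host="google.com", port=80):
            t0 = time.perf_counter()
            # resolve to a single IPv4 address; the chosen address may change per run
            addr = socket.getaddrinfo(host, port, family=socket.AF_INET,
                                      proto=socket.IPPROTO_TCP)[0][4]
            t1 = time.perf_counter()                     # DNS lookup done
            with socket.create_connection(addr, timeout=5):
                t2 = time.perf_counter()                 # TCP handshake done
            return addr, t1 - t0, t2 - t1

        addr, dns_s, tcp_s = connect_time()
        print(f"connected to {addr}: DNS {dns_s*1000:.1f} ms, TCP connect {tcp_s*1000:.1f} ms")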

    Read the article

  • Time tracking tool for monitoring application usage

    - by wizlog
    I want to know how I'm really using my computer, and where the time goes (e.g. I have an English paper due, and I intend on getting it done, it's 2:30 PM... no wait, it's 8:30 PM...). What software can tell me (a) what programs I use, and when, and (b) within programs like Google Chrome or Firefox, which tabs I spend the most time on (so I know if I'm spending the time playing a game, or watching a movie on Hulu...)?

    Read the article

  • Prevent Firefox from going to first search result

    - by Dejan
    When I type some terms in the address bar (not the search box!) and hit Enter, Firefox searches for those terms on Google and, depending on some logic, either takes me to the search results page or takes me to the first search result. Now, I want it to always take me to the search results page (like Chrome does). Is this possible? And, yes, I am aware that the search box does exactly that, but I'm using it for some other search engine. So, another solution would be to add an additional search box that can also work for me.

    Read the article

  • Cost effective way to provide static media content

    - by james
    I'd like to be able to deliver around 50MB of static content, either in about 30 individual files of up to 10MB or grouped into 3 compressed files, around 5k to 20k times a day. Ideally I'd like to put some sort of very basic security around providing the data to ensure that a request is from the expected source, but if tossing the security for a big reduction in price is possible then it's an option. Does anyone have any suggestions other than what I've found? Google App Engine is $0.12/GB and I believe has a file size limit of 10MB, so I'd have to break the data up a bit. A rough calculation suggests this would cost me about $30 to $120 a day. Or I've seen something like what seems to be just public static content delivery, with no logic capabilities, like Usenet.nl at what I think calculates to about $0.025/GB, which would cost me about $6 to $25 a day. Any idea if I'm going about these calculations right, and if there might be a better option for just static content at a decently high delivery volume? Again, some basic security would be great, but if cost is greatly reduced without it then I'm up for that.
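
    The arithmetic behind those figures, spelled out (a sketch; it simply reproduces the question's own numbers of ~50 MB per delivery, 5k-20k deliveries a day, and the two per-GB prices):

        # transfer_cost.py - rough daily bandwidth cost for the scenario above (sketch).
        PAYLOAD_GB = 50 / 1024            # ~50 MB delivered per request

        def daily_cost(requests_per_day, price_per_gb):
            return requests_per_day * PAYLOAD_GB * price_per_gb

        for price in (0.12, 0.025):       # App Engine-style vs. flat static-delivery pricing
            low, high = daily_cost(5_000, price), daily_cost(20_000, price)
            print(f"${price}/GB: ${low:,.0f} - ${high:,.0f} per day")
        # -> $0.12/GB:  $29 - $117 per day
        # -> $0.025/GB: $6 - $24 per day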

    Read the article

  • Why does my Canon printer print document pages at ~25% size?

    - by Erlend Alvestad
    I'm using a Canon PIXMA MP250, and I'm running 12.04 LTS. The printer's been working fine for the couple of months I've been a Linux user. That is, until today. I just printed a 1-page ODT document from LibreOffice. Instead of filling the sheet, the document occupies only a little less than 25% of the paper, in the top left corner, and the text has also shrunk to something like 5pt. I looked at the paper format settings for the document and printer, which were set to "letter". I changed these to "A4", hoping that would solve the issue. There was no change, however. I tried printing a different document in LibreOffice and got the same result. I tried exporting the original document to PDF and printing it through Document Viewer. Same result. I then printed a web page from Google Chrome. No formatting problems there. In all cases the print preview looks fine.

    Read the article

  • VPN Connected, How to browse files? Windows Vista

    - by Wbdvlpr
    I am trying to establish a VPN connection to a server in my office from my laptop at home. I tried the steps mentioned here:
    - Connect to a network
    - Connect to a workplace
    - Use my Internet Connection (VPN)
    - Then type the server IP address and then my username & password.
    After creating the VPN connection, I can see I am connected to it. Now I want to browse files on the server, but I have no clue where I should look for them. I was thinking of something simple, like: Windows Run > type the IP address > \\124.345.678.900, then a prompt asking for a username and password, and finally a window opens to view the files. I tried to Google it, but I am still unable to view files. Please help. Update: I didn't mention that when I try to connect to the server via \\124.345.678.900 I get a 0x80070043 error message.

    Read the article

  • Outgoing mail from linux not being delivered

    - by Jason
    I can't seem to send mail through my PHP scripts or through the Linux console on my CentOS 5.5 LAMP server when the email is addressed to a domain that is hosted by my box. I think it is something to do with the email routing internally, or the DNS servers that the box uses not reporting the correct MX records. Basically my box doesn't host any mail; it's all hosted on Google Apps. My name servers are hosted by a 3rd-party provider and I am using Webmin. Webmin doesn't recognise the settings on the 3rd-party provider. I'm unsure how to fix this. Previously, when I had this problem on a cPanel server, I would edit the remotedomains and localdomains files, moving domains from one file to the other, and it would fix the problem. What information do I need to provide for anyone to work out what the issue is? Thanks

    Read the article

  • Problem connecting to isp server using xl2tpd as client. Ubuntu server 13.04

    - by Deon Pretorius
    I have followed guides found on Google and the Ubuntu support pages and can get the xl2tpd connection up, but only under the following conditions:
    1 - the ADSL modem must be configured and connected to the ISP, or
    2 - with the ADSL modem in bridge mode, I must have an existing PPPoE connection established.
    If neither of the above is active, xl2tpd won't trigger pppd and connect to the ISP, and thus the tunnel fails to connect to the L2TP server of the ISP. Am I doing something wrong?

        /etc/ppp/options.l2tpd.axxess:
        ipcp-accept-local
        ipcp-accept-remote
        refuse-eap
        refuse-chap
        require-pap
        noccp
        noauth
        idle 1800
        mtu 1200
        mru 1200
        defaultroute
        usepeerdns
        debug
        lock
        connect-delay 5000
        name (name used for ppp connection)

        /etc/ppp/pap-secrets:
        # *  password
        (name used for ppp connection as above)  *  (ppp password supplied by isp)

        /etc/xl2tpd/xl2tpd.conf:
        [global]
        ; Global parameters:
        auth file = /etc/xl2tpd/l2tp-secrets  ; * Where our challenge secrets are
        access control = yes                  ; * Refuse connections without IP match
        debug tunnel = yes

        [lac axxess]
        lns = 196.30.121.50                   ; * Who is our LNS?
        redial = yes                          ; * Redial if disconnected?
        redial timeout = 5                    ; * Wait n seconds between redials
        max redials = 5                       ; * Give up after n consecutive failures
        hidden bit = yes                      ; * Use hidden AVP's?
        length bit = yes                      ; * Use length bit in payload?
        require pap = yes                     ; * Require PAP auth. by peer
        require chap = no                     ; * Require CHAP auth. by peer
        refuse chap = yes                     ; * Refuse CHAP authentication
        require authentication = yes          ; * Require peer to authenticate
        name = BLA85003@axxess                ; * Report this as our hostname
        ppp debug = yes                       ; * Turn on PPP debugging
        pppoptfile = /etc/ppp/options.l2tpd.axxess  ; * ppp options file for this lac

        /etc/xl2tpd/l2tp-secrets:
        # Secrets for authenticating l2tp tunnels
        # us      them      secret
        # *       marko     blah2
        # zeus    marko     blah
        # *       *         interop
        *  vzb_l2tp (*** secret supplied by isp)   ^ isp server host name

    Any help will be greatly appreciated

    Read the article

  • Logging library for (c++) games

    - by Klaim
    I know of a lot of logging libraries but haven't tested many of them (GoogleLog, Pantheios, the upcoming boost::log library...). In games, especially in remote multiplayer and multithreaded games, logging is vital to debugging, even if you remove all logs in the end. Let's say I'm making a PC game (not console) that needs logs (multiplayer and multithreaded and/or multiprocess) and I have good reasons for looking for a logging library (like, I don't have time, or I'm not confident in my ability to write one correctly for my case). Assuming that I need: performance; ease of use (allow streaming or formatting or something like that); reliability (doesn't leak or crash!); cross-platform support (at least Windows, MacOSX, Linux/Ubuntu). Which logging library would you recommend? Currently, I think that boost::log is the most flexible one (you can even log remotely!), but it does not have good performance. Pantheios is often cited but I don't have comparison points on performance and usage. I've used my own lib for a long time, but I know it doesn't manage multithreading, so that's a big problem, even if it's fast enough. Google Log seems interesting; I just need to test it, but if you have already compared these libs and more, your advice might be of good use. Games are often performance-demanding yet complex to debug, so it would be good to know which logging libraries, in our specific case, have clear advantages.

    Read the article
