Search Results

Search found 25362 results on 1015 pages for 'compiling from source'.


  • What are some best practices for minifying code?

    - by CrystalBlue
    While maintaining the sites our development team has created, we have come across include files and plugins that have proven useful to more than one part of our applications. Most of these modules come with two files, a normal source file and a min file. Since the performance and speed of a page can be improved by minifying its files, we're looking into doing that to our pages as well.

    The problem is that most of our normal pages (written in classic ASP) are a mix of HTML, ASP, JavaScript, CSS, and include files. Some pages keep their JS both in include files and in the page itself, depending on whether a function is used only on that page or across many pages. For example, we have a common.js and an ajax.js file; both are used in a lot of pages, but not all of them. Some page-specific functions also don't really make sense to move into one master file.

    What I have seen a few other people do is put all of their JavaScript into one master JS file, minify it, gzip it, and use only that on the production server. Again, this sounds great, but I don't know if it fully works for our purposes. What I'm looking for is some direction. I'm in favor of taking all of our JS, putting it in one include file, and having it included in every page that is hit. However, not every page needs every bit of JS. So would it be worth concatenating and minifying the files into one master file included everywhere, or would it be better to minify all the files separately and still include them on a need-to-use basis?
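
    Either route starts with the same build step. A minimal sketch using the UglifyJS command-line tool (assuming a Node.js toolchain with uglify-js installed; the file names are the ones from the question):

        # bundle the shared scripts, compress and mangle, emit one minified file
        uglifyjs common.js ajax.js -c -m -o master.min.js
        # pre-compress so the web server can hand out the .gz directly
        gzip -9 -c master.min.js > master.min.js.gz

    A common middle ground is one cached shared bundle plus one small per-page file, so no page pays for scripts it never calls.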

    Read the article

  • How do I get a Dell Latitude e6420 working?

    - by David_G
    I've just installed Ubuntu 12.04 (64-bit) on a brand new Dell Latitude e6420, and I'm having a few problems. This laptop has an Optimus setup, i.e. integrated graphics plus an Nvidia Quadro NVS 4200M.

    First problem: I ran setup and discovered that I can only run Unity 2D. If I try to log in with Unity 3D, it just falls back to 2D. This is with nvidia-current installed (302.07). Note also that I can't run nvidia-settings ("You do not appear to be using the NVIDIA X driver."), and no additional drivers are found ("No proprietary drivers are in use on this system").

    I tried to troubleshoot this and removed the Nvidia packages, leaving (I guess) just the Nouveau driver. In that case Unity 3D did work, but I was stuck with the open source Nouveau driver powering the integrated graphics.

    So, obviously, I want to run Unity 3D using the more powerful Nvidia graphics card. I've done a bit of tinkering, but I'm not sure of the best way to proceed, or perhaps more importantly, what the best final solution might be. I've heard about Bumblebee, but frankly I would prefer to have the proprietary Nvidia drivers working properly. Any help would be much appreciated!
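
    On Optimus hardware the discrete GPU cannot simply take over the desktop; on 12.04 the standard workaround is Bumblebee, which keeps the integrated graphics for the desktop and starts the proprietary driver per application. A sketch of the usual install steps (assuming the ppa:bumblebee/stable PPA supports your release):

        sudo add-apt-repository ppa:bumblebee/stable
        sudo apt-get update
        sudo apt-get install bumblebee bumblebee-nvidia
        # after a reboot, run a program on the Nvidia card explicitly:
        optirun glxgears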

    Read the article

  • I don't program in my spare time. Does that make me a bad developer?

    - by not-my-real-name
    A lot of blogs and advice on the web seem to suggest that in order to become a great developer, doing just your day job is not enough. For example, you should contribute to open source projects in your spare time, write smartphone apps, etc. In fact, a lot of this advice seems to suggest that if you don't love programming enough to do it all day long, then you're probably in the wrong career.

    That doesn't ring true with me. I enjoy my work, but when I come home from the office I'm not in the mood to jump straight back onto the computer and code away until bedtime. I only have a certain number of hours of free time each day, and I'd rather spend them on other hobbies, seeing friends, or going outside than in front of the computer.

    I do get a kick out of programming, and I do hack around outside of work occasionally. I'm committed to my personal development and spend time reading tech blogs and books to keep learning and getting better. But that doesn't extend to wanting to use all my spare time for coding.

    Does this mean I'm not a 'true' software developer at heart? Is it possible to become a good software developer without doing extra work outside your job? I'd be very interested to hear what you think.

    Read the article

  • Apache 2.0 is serving PHP files as plain text

    - by denonth
    I have a web application written in PHP, HTML and JavaScript. On my PC I have EasyPHP installed, which bundles Apache and everything else. But when I moved this web app to my server, where I installed Apache 2.0, my PHP files are displayed as plain text, or the browser starts downloading them. I have tried several things, one of them being to add this to my conf file:

        AddType application/x-httpd-php .php
        AddType application/x-httpd-php-source .phps

    But it's still not working. What else can I do? Thank you
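
    The AddType lines only associate the extensions; they do nothing unless the PHP module itself is loaded into Apache. A sketch of the missing piece (module path and name vary by distribution and PHP version, so treat these as placeholders):

        # httpd.conf: load the interpreter first...
        LoadModule php5_module modules/libphp5.so
        # ...then the AddType mapping takes effect
        AddType application/x-httpd-php .php

    On Debian/Ubuntu the equivalent is apt-get install libapache2-mod-php5 followed by a2enmod php5 and an Apache restart.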

    Read the article

  • Pull Request Conversations, Inline Diff Enhancements

    [Do you tweet? Follow us on Twitter @matthawley and @adacole_msft] We deployed a new version of the CodePlex website today.

    Pull Request Conversations

    Previously, the only way for project members and users who submitted pull requests to converse was via e-mail. This complicated the review process and made conversations isolated and difficult to track. For this release, we’ve added functionality that enables you to have those same conversations within the pull request page. When you view a pull request, you’ll now see “Comments” and “Changes” tabs, with current comments displayed.

    Inline Diff Enhancements

    We tweaked the inline diff experience to make it easier to traverse diff blocks. When you open up the inline diff experience, you’ll now see up and down arrows. To move between the diff blocks, you can use those arrows or utilize the available keyboard shortcuts. Lastly, we have also brought the inline diff experience to the source control changes page for project and fork changesets. You can see both enhancements live by viewing the associated pull request or changeset changes on WikiPlex.

    The CodePlex team values your feedback. We are frequently monitoring Twitter, our Discussions, and Issue Tracker. If you have not visited the Issue Tracker recently, please take a few minutes to suggest or vote on a feature you would like to see implemented.

    Read the article

  • Is there any kind of established architecture for browser based MMO games?

    - by black_puppydog
    I am beginning development of a browser based game in which players take certain actions at any point in time. Big parts of gameplay will happen in real life and just have to be entered into the system. I believe a good comparison might be a platform for managing fantasy football, although I have virtually no experience playing that, so please correct me if I am mistaken.

    The point is that some events happen in the program (i.e. on the server, out of reach of the players), like pulling new results from some data source or a game master starting a new round. Other events happen in real life (two players closing a deal on the transfer of some team member or whatnot; again, I have never played fantasy football) and have to be entered into the system. The first part is pretty easy, since the game masters will be "staff" and thus can be trusted, to a certain degree, not to mess with the system. But the second part bothers me quite a lot, especially since the actions may involve multiple steps and interactions with different players, like registering a deal with the system that then has to be approved by the other party, or denied and passed on to a game master to decide.

    I would of course like to separate the game logic as far as possible from the presentation and basic form validation, but I am unsure how to do this in a clean fashion. Of course I could (and will) put some effort into making my own architectural decisions and prototyping different ideas. But I am bound to make some stupid mistakes at some point, so I would like to avoid some of them by getting a little "book smart" beforehand.

    So the question is: is there any kind of architectural work that I can read up on? Papers, blogs, maybe design documents or even source code? Writing this down, it seems more like a business application with business rules, workflows and such... Any good entry points for that?

    Read the article

  • How can I measure TCP timeout limit on NAT firewall for setting keepalive interval?

    - by jmanning2k
    A new (NAT) firewall appliance was recently installed at $WORK. Since then, I'm getting many network timeouts and interruptions, especially for operations that require the server to think for a bit without sending a response (svn update, rsync, etc.). Inbound SSH sessions over VPN also time out frequently. That strongly suggests I need to adjust the TCP (and SSH) keepalive time on the servers in question in order to reduce these errors. But what is the appropriate value to use?

    Assuming I have machines on both sides of the firewall between which I can make a connection, is there a way to measure what the time limit on idle TCP connections might be for this firewall? In theory, I would send packets at gradually increasing intervals until the connection is lost. Any tools that might help (free or open source would be best, but I'm open to other suggestions)?

    The appliance is not under my control, so I can't just read off the value, though I am attempting to ask what it currently is and whether I can get it increased.
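
    The measurement described can be scripted directly: hold a connection idle for progressively longer intervals and see when a probe stops getting through. A minimal Python sketch, assuming some plain TCP echo-style service is reachable on the far side of the firewall (host, port and the doubling strategy are all placeholders to adapt):

        import socket
        import time

        # hypothetical echo service on a host behind the firewall
        HOST, PORT = "inside.example.com", 7777

        def survives_idle(seconds):
            """Open a TCP connection, leave it idle, then test whether it still works."""
            s = socket.create_connection((HOST, PORT), timeout=10)
            try:
                time.sleep(seconds)        # idle time the NAT mapping must survive
                s.sendall(b"ping\n")
                return s.recv(16) != b""   # timeout or reset here means the mapping died
            except (socket.timeout, socket.error):
                return False
            finally:
                s.close()

        idle = 60
        while survives_idle(idle):
            print("survived %d s idle" % idle)
            idle *= 2                      # double until the firewall drops us
        print("dropped at or below %d s idle" % idle)

    Once the threshold is found, setting the keepalive interval to roughly half of it leaves a comfortable margin.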

    Read the article

  • Reverse Proxy to filter out js files from multiple hosts in nginx

    - by stwissel
    I have a website http://someplace.acme.com that I want my users to access via http://myplace.mycorp.com - a pretty standard reverse proxy setup. The special requirement: any js file - identified by the .js extension and/or the mime type text/javascript (if that is possible) - needs to be served from a different location, a local tool that inspects the js for potential threats. So I have:

        location / {
            proxy_pass http://someplace.acme.com;
            proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
            proxy_redirect off;
            proxy_buffering off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        location ~* \.(js)$ {
            proxy_pass http://127.0.0.1:8188/filter?source=$1;
            proxy_redirect off;
            proxy_buffering off;
        }

    The JS is still served from the remote host, and I have no idea how to check the mime type. What am I missing?
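
    Two details of the posted config are worth noting. First, nginx chooses a location from the request URI before the upstream ever responds, so routing by response mime type is not possible in stock nginx; only the extension test can work here (inspecting Content-Type would need a response-phase mechanism such as the OpenResty/Lua body filter, or the filter tool itself). Second, $1 captures only the literal string "js", and proxy_pass with a URI part inside a regex location is only accepted when that URI is built from variables. A sketch of a corrected block (host names as in the question):

        location ~* \.js$ {
            # pass the full original path and query string to the filter
            proxy_pass http://127.0.0.1:8188/filter?source=http://someplace.acme.com$request_uri;
            proxy_redirect off;
            proxy_buffering off;
        }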

    Read the article

  • Working with Google Webmaster Tools

    - by com
    My first question is about crawl errors in Google Webmaster Tools. Crawl errors are divided into a few sections, one of them being HTTP. I assume that all the broken links under HTTP were somehow found by the crawler, and that these are not links from the sitemap. If they were found by scanning all sitemap pages for links, why doesn't it mention the source page, like the sitemap section does with its Linked From column? And what is the meaning of Linked From there? I thought that if the name of the section is Sitemap, then all URLs should be taken from the sitemap, so why is there a Linked From at all?

    The second question: what is the best way to treat searching on the site? How come search result pages are getting indexed? Because all the search result pages are getting indexed, I have too many pages in Linked From. What's the right practice?

    Question three: in order to improve response time in WMT, can I redirect all crawler requests to a designated free web server? Is this good practice?

    Question four: how should I treat the Google Analytics code (with the parameters PageView, PageLoadTime) when a user requests a non-existing page; should I render the Google code or not? Right now I use the Google Analytics code on the common template page, so that every page, including non-existing pages with an error message, contains the Google Analytics code, and it seems to have an influence on WMT.
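
    On the second question, the usual practice is to keep internal search result pages out of the index entirely. A sketch, assuming the site's search results live under a /search path (adjust the pattern to the real URLs):

        # robots.txt: stop crawlers from fetching internal search results
        User-agent: *
        Disallow: /search

    Alternatively, leave them crawlable but add <meta name="robots" content="noindex, follow"> to the search results template, so they are fetched yet never indexed.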

    Read the article

  • Protected flash video (requires HAL) on Gentoo

    - by Mala
    I am unable to play "protected" flash video, such as Amazon Prime Instant Video. From what I've read and uncovered, this seems to be due to a lack of HAL being installed on my computer. Confirmation that it is required for protected video can be seen towards the beginning of http://helpx.adobe.com/x-productkb/multi/flash-player-11-problems-playing.html

    However, hal is not in the Gentoo portage tree, and in any case has been deprecated and replaced by udev. How can I go about getting Amazon Prime Instant Video to work again? I was considering grabbing the source from http://www.freedesktop.org/wiki/Software/hal but the links there won't load, and trying to install it from old ebuilds or from overlays which claim to still support it (e.g. kde-sunset) results in a compilation error:

        In file included from addon-generic-backlight.c:38:0:
        /usr/include/glib-2.0/glib/gmain.h:21:2: error: #error "Only <glib.h> can be included directly."

    Has anyone else solved this issue?
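
    That particular error is the usual symptom of building pre-2011 code against a modern glib, which forbids including its sub-headers directly. The conventional patch is to swap the direct sub-header includes for the umbrella header before building; a sketch (run from the unpacked hal source tree; the set of offending files varies by version):

        grep -rl '#include <glib/gmain.h>' . \
            | xargs sed -i 's|#include <glib/gmain.h>|#include <glib.h>|'

    Repeat for any other <glib/...> header the build stops on. Whether the resulting hal is enough to satisfy the Flash DRM check is a separate question.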

    Read the article

  • Could these people get arrested? [closed]

    - by Vinicius Horta
    I have seen many so-called 'private servers' for MMORPGs (multiplayer online games), which use stolen sources, modified executables, clients and servers. People launch their own server on a VPS or dedicated server and run the online service for players, disclaiming that it is for educational purposes only and saying they are studying the game engine, while selling items to players and calling the payments 'donations', so it seems like they are getting donations to keep studying. We all know it's a commercial operation. All of it is copyrighted material from enterprise ABCD (a fictional name; I'm not mentioning real names).

    At their website they include the following: "Private Server XXXX does not allow/support any connection to any company/organization associated with the game XXXX. If you are in any way affiliated with enterprise XXXXXXX, or any other company/organization associated with the game XXXX, you may not view/open/read/execute/play/download any part of Private Server XXXX, nor view the Private Server XXXX website; if any company/organization requests you to investigate our website/server, you may not view Private Server XXXX or execute any action mentioned above. Any person caught disobeying this disclaimer will be punished to the fullest extent of the law."

    Can these guys get arrested? Does their disclaimer work? If I'm the owner of enterprise X and I know people stole my source and are using it, but they have such a disclaimer, am I not allowed to investigate them?

    Read the article

  • Headphones not working in ubuntu 12.04LTS

    - by mursalat
    So after a million warnings about an unsupported Ubuntu OS, last night I finally upgraded to 12.04; it went somewhat smoothly (sadly). After my installation, all excited with the new look at login and all the new shebang, I installed Chrome and went to YouTube to check out some of my music and test Flash in the process. The sound worked awesomely. However, when I plugged in my headphones, I could hear only a buzz-like sound when the bass drops or there is a loud noise; maybe I didn't hear it from the headphones but from some other source. Sadly, my headphones are not working, and I am a noob at fixing this stuff on Ubuntu. I've done loads of programming, but when it comes to Linux drivers and settings I seem to get frustrated. So I would really appreciate any help!

    IN SUMMARY: My headphones are not working; my laptop's internal speakers are working fine. To be as helpful as I can, I have pasted the output from lspci -v to http://pastebin.com/VQNzDkZs I have also checked the volume levels in alsamixer and none are on mute. If you need any more information, please just ask; I will be checking this post every 3 to 4 hours! Cheers!

    Read the article

  • Good links somehow being converted to ones with a PHP redirect (not a virus)

    - by Rebecca
    This has happened to links we put on web pages and in emails. We might put www.oursite.org/work/ but when I view source it shows up as webmail.ourhosting.ca/hwebmail/services/go.php?url=https%3A%2F%2Fwww.oursite.org%2F%2work%2F

    This ends up at the webmail login page for our web host. But only some of the people who click the link get the login page; others go directly to the page we intended. We don't want it to go to the webmail login page; nobody needs to log in to our web site. This occurs for links to pages on our own site, but also for links to other sites that we put in emails or in posts. It seems to be browser independent as well as email client independent, as we have variously used Firefox and Chrome as well as MS Outlook and Thunderbird.

    I've tried to resolve the issue with our web host, but they keep telling me they don't support our browser or our email client (i.e., they don't understand the issue). At the moment, our only option is to try another web host just to get rid of their login page. Any ideas about what's going on?

    Read the article

  • Nginx proxy SOAP request

    - by user2606078
    I'm looking for the right way to accomplish the following: there is an app that has URL(1) hardcoded, with no way/time to change it in the source:

        http://dev.server.com/example.com/admin/soap/action/index?pr=1

    and it should use (and get its response from) URL(2):

        http://example.com/admin/soap/action/index?pr=1

    What should I configure in Nginx (Apache is available as a backup) on dev.server.com so that when the app asks for URL(1) it gets the answer from URL(2)? On dev.server.com, Apache has the virtual host dev.server.com enabled. I've also tried to proxy in Apache instead of Nginx by using ProxyPass:

        <Directory /var/www/dev>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride all
            Order allow,deny
            allow from all
        </Directory>

        <Location /example.com/admin/soap>
            ProxyPass http://example.com/admin/soap
        </Location>
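
    A minimal nginx sketch of the same mapping: requests hitting dev.server.com with the hardcoded prefix are forwarded to the real host, with the Host header rewritten so the target vhost matches (names as in the question):

        server {
            listen 80;
            server_name dev.server.com;

            location /example.com/admin/soap/ {
                # the URI part after the trailing slash replaces the matched prefix
                proxy_pass http://example.com/admin/soap/;
                proxy_set_header Host example.com;
            }
        }

    The query string (?pr=1) passes through unchanged. The Apache ProxyPass attempt above looks essentially right as well; with the default ProxyPreserveHost Off, the backend should already see Host: example.com.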

    Read the article

  • aplay -l says no soundcards found; alsaconf says no supported cards; yet /proc/asound contains cards

    - by nimasmi
    I am trying to get HDMI output using a Gainward Nvidia 210 512 MB on Ubuntu 10.04 Lucid Lynx. I have upgraded alsa-driver, alsa-lib and alsa-utils to 1.0.24 by building from source, thanks to this blog post. Some relevant output...

        user@box:~$ lspci | grep Audio
        00:05.0 Audio device: nVidia Corporation MCP61 High Definition Audio (rev a2)
        01:09.0 Multimedia video controller: Conexant Systems, Inc. CX23880/1/2/3 PCI Video and Audio Decoder (rev 05)
        01:09.2 Multimedia controller: Conexant Systems, Inc. CX23880/1/2/3 PCI Video and Audio Decoder [MPEG Port] (rev 05)
        01:09.4 Multimedia controller: Conexant Systems, Inc. CX23880/1/2/3 PCI Video and Audio Decoder [IR Port] (rev 05)
        02:00.1 Audio device: nVidia Corporation High Definition Audio Controller (rev a1)

        user@box:~$ cat /proc/asound/version
        Advanced Linux Sound Architecture Driver Version 1.0.24.
        Compiled on Sep 15 2012 for kernel 2.6.32-42-generic (SMP).

        user@box:~$ ls /proc/asound
        card0  cards  hwdep  NVidia  oss  seq  version
        card1  devices  modules  NVidia_1  pcm  timers

        user@box:~$ aplay -l
        aplay: device_list:240: no soundcards found...

        user@box:~$ sudo /sbin/alsa-utils start
         * Setting up ALSA...
         * warning: 'alsactl restore' failed with error message 'alsactl: set_control:1403: Cannot write control '2:0:0:IEC958 Playback Default:0' : Operation not permitted'...
        amixer: Invalid command!
        ...done.

    Any help appreciated. PS: my video card is connected only through the PCI-E slot; I assume there is no extra audio connection required.
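
    Since /proc/asound lists both cards, the kernel driver is loaded and the failure is on the userspace side; after a from-source ALSA upgrade that is most often a mismatch between the library the tools link against and the one just installed, or plain device permissions. A few hedged checks:

        ls -l /dev/snd/                    # device nodes present and group-accessible?
        groups | grep audio                # is the user in the audio group?
        ldd $(which aplay) | grep asound   # which libasound is aplay actually using?
        ls /usr/lib/libasound* /usr/local/lib/libasound* 2>/dev/null   # duplicate installs?

    If both a distro copy and a /usr/local copy of libasound turn up, pointing the tools at one of them (or removing the stale one) is the usual cure.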

    Read the article

  • Webmaster Tools is reporting 404 errors for a link that is not on the page

    - by plantify
    Webmaster Tools is showing thousands of 404 errors, where pages on the site supposedly refer to another, incorrect URL. For example: URL not found www.plantify.co.uk/shop/=, linked from http://www.plantify.co.uk/shop/gift-voucher and http://www.plantify.co.uk/shop/special-plant-offers. I have checked the source, obviously, and cannot find any reference to this link on any page.

    The only consistent pattern is that the error is only reported on pages with two path segments, i.e. www.plantify.co.uk/shop does not report any error, whilst pages of the form www.plantify.co.uk/shop/xxx (where xxx can be several different pages, such as gift-voucher) all report it. I have run a link checker (we use Screaming Frog) and it does not report this error. I have fetched these pages as a bot, and they do not show this error either. I am at a total loss: I cannot reproduce the issue, but it is most definitely an issue, as Webmaster Tools is reporting new errors every day. Is this perhaps Googlebot doing its own thing?

    Read the article

  • Free tool to automatically deskew and crop PDF made up of scanned pages [closed]

    - by Pietro M.
    I have several PDFs made up of book pages' scans. The scans are made from two pages at a time and some of these scans are skewed, making text appear slightly tilted. I'm looking for a tool that could allow me to do an automatic optimization by deskewing the scans without losing readability. I've found the GPL software briss to crop the scans in order to have a 1:1 page ratio instead of 2:1, but I don't have any tool to deskew the pages. I stumbled upon unpaper, another open source tool that seems perfect for what I want to do, but that tool is Linux only and it doesn't work on PDF files directly. Any hint is appreciated. Thank you.

    Read the article

  • Organize code in Chef: libraries, classes and resources

    - by ColOfAbRiX
    I am new to both Chef and Ruby, and I am implementing some scripts to learn them. Now I am facing the problem of how to organize my code: I have created a class in the libraries directory and used a custom namespace to maintain order. This is a simplified example of my file:

        # ~/chef-repo/cookbooks/mytest/libraries/MyTools.rb
        module Chef::Recipe::EP
          class MyTools
            def self.print_something( text )
              puts "This is my text: #{text}"
            end
            def self.copy_file( dir, file )
              cookbook_file "#{dir}/#{file}" do
                source "#{dir}/#{file}"
              end
            end
          end
        end

    From my recipe I call both methods:

        # ~/chef-repo/cookbooks/mytest/recipes/default.rb
        EP::MyTools.print_something "Hello World!"
        EP::MyTools.copy_file "/etc", "passwd"

    print_something works fine, but with copy_file I get this error:

        undefined method `cookbook_file' for Chef::Recipe::EP::FileTools:Class

    It's clear that I don't know how to create libraries in Chef, or that I am missing some basic assumption. Can anyone help me, please? I am looking for a solution to this problem (organizing my code into libraries and using resources in classes) or, better, good Chef documentation, as I find the official documentation unclear and disorganized to the point that researching in it is a pain.
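
    The error happens because cookbook_file is not a class method anywhere: the resource DSL only exists on objects that carry a recipe run context. The conventional library pattern is therefore a module of instance methods mixed into Chef::Recipe, so the helpers execute with the recipe itself as self. A minimal sketch of the same two helpers rewritten that way:

        # ~/chef-repo/cookbooks/mytest/libraries/my_tools.rb
        module EP
          module MyTools
            def print_something(text)
              Chef::Log.info("This is my text: #{text}")
            end

            def copy_file(dir, file)
              # runs in recipe context, so the resource DSL is available here
              cookbook_file "#{dir}/#{file}" do
                source file
              end
            end
          end
        end

        Chef::Recipe.send(:include, EP::MyTools)

    The recipe then calls the helpers directly:

        print_something "Hello World!"
        copy_file "/etc", "passwd"

    Note that source names a file shipped in the cookbook's files/ directory; copying an arbitrary path from the node itself would need a different resource.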

    Read the article

  • Realtime file-level mirroring from local NTFS to network drive

    - by hurfdurf
    We have some data collection machines running WinXP. After a new file is written, we would like to immediately copy the new file to network storage (a NetApp CIFS share) automagically. We need realtime or near realtime copies generated (copy upon filehandle close would be fine -- these are not long-running system logs). Two commercial applications I've found so far are MirrorFile and IBM's Tivoli CDP. Are there any reliable open source programs or simple ways to get Shadow Copy to do something similar? Bonus points if it runs as a service.
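
    One built-in avenue worth testing before buying anything: robocopy (available for XP via the Windows Resource Kit tools) has a monitoring mode that re-runs the copy whenever it sees changes. A sketch with placeholder paths:

        robocopy C:\capture \\netapp\share\capture /E /Z /MON:1
        :: /MON:1 = run again once 1 change is detected; robocopy batches changes,
        :: re-scanning roughly once a minute at most

    The batching makes this near-realtime rather than copy-on-close, so whether it qualifies depends on how tight "immediately" has to be.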

    Read the article

  • Diving into a computer science career [closed]

    - by Willis
    Well, first I would like to say thank you for taking the time to read my question. I'll give you some background. I graduated two years ago from a local UC in my state with a degree in cognitive psychology and worked in a neuroscience lab. During this time I was exposed to some light Matlab programming and other programming tidbits; before that I had a basic understanding of programming, since my father worked in IT for a company when I was younger, and I picked up his books and learned things along the way growing up. Naturally I'm an inquisitive person who is constantly learning and loves challenges, and I have had exposure to a few languages.

    At this point I want to fully pursue programming as a career; it has always been in the back of my head. Where do I start? I'm 25 and feel like I still have time to make a switch. I've immersed myself in the terminal/command prompt to start, but which language do I focus on? I've read the A+ book and am planning to take the exam, then the networking exam, but I want to deal with more programming, development, and troubleshooting. I understand I should get involved in open source, but where? I took the next step and got a small IT assistant job, but it doesn't really deal with programming or development, just troubleshooting and small network issues. Thank you!

    Read the article

  • Saving 16:9 video in Movie Maker without black border

    - by Tschareck
    I'm editing my video in Windows Live Movie Maker from Live Essentials 2011. My source video is from a camera, in .mp4 format, with a size of 1280 x 720. After editing in Movie Maker, I save the movie, and no matter what option I choose, I always end up with a .wmv file that is either a 4:3 image with black stripes above and below the video, or 16:9 with a black frame all around the image. What settings should I use to export or save the video at 1280 x 720 without any black border?

    Read the article

  • Who should have full visibility of all (non-data) requirements information?

    - by ebyrob
    I work at a smallish mid-size company where requirements are sometimes nothing more than an email or a brief meeting with a subject matter manager who wants some new feature. Should a programmer working on a feature reasonably expect to have access to such "request emails" and other requirements information? Or is it more appropriate for a "program manager" (PGM) to rewrite all requirements before sharing them with programmers? The company is not technology-centric and has between 50 and 250 employees (fewer than 10 programmers in sum). Our project management "software" consists of a "TODO.txt" checked into source control in "/doc/".

    Note: this has nothing to do with "sensitive data access", unless a particular subject matter manager's style of email correspondence is top secret. Given the suggested duplicate, perhaps this could be a turf war, as the PGM would like to specify HOW, whereas WHY is absent and WHAT is muddled by the time it gets through to the programmer(s).

    Basically: should specifications be transparent to programmers? A history of requirements might exist. Shouldn't a programmer be able to see that history of requirements if/when they can tell something is hinky in the spec? This isn't a question about organizing requirements; it is a question about WHO should have full VISIBILITY of them. I'd propose it should be ALL STAKEHOLDERS. Please point out where I'm wrong here.

    Read the article

  • Are null references really a bad thing?

    - by Tim Goodman
    I've heard it said that the inclusion of null references in programming languages is the "billion dollar mistake". But why? Sure, they can cause NullReferenceExceptions, but so what? Any element of the language can be a source of errors if used improperly. And what's the alternative? I suppose instead of saying this:

        Customer c = Customer.GetByLastName("Goodman"); // returns null if not found
        if (c != null)
        {
            Console.WriteLine(c.FirstName + " " + c.LastName + " is awesome!");
        }
        else
        {
            Console.WriteLine("There was no customer named Goodman.  How lame!");
        }

    You could say this:

        if (Customer.ExistsWithLastName("Goodman"))
        {
            Customer c = Customer.GetByLastName("Goodman"); // throws error if not found
            Console.WriteLine(c.FirstName + " " + c.LastName + " is awesome!");
        }
        else
        {
            Console.WriteLine("There was no customer named Goodman.  How lame!");
        }

    But how is that better? Either way, if you forget to check that the customer exists, you get an exception. I suppose that a CustomerNotFoundException is a bit easier to debug than a NullReferenceException by virtue of being more descriptive. Is that all there is to it?
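
    For contrast, a sketch of the alternative usually meant by the 'billion dollar mistake' critique: make absence explicit so the lookup and the check cannot be separated. In C# the lightweight form is the Try-pattern (TryGetByLastName here is hypothetical, mirroring TryGetValue on the standard dictionaries):

        Customer c;
        if (Customer.TryGetByLastName("Goodman", out c))   // hypothetical Try-style lookup
        {
            Console.WriteLine(c.FirstName + " " + c.LastName + " is awesome!");
        }
        else
        {
            Console.WriteLine("There was no customer named Goodman.  How lame!");
        }

    You can still ignore the boolean, but there is no second fetch to forget and no window where the answer to "exists?" goes stale; languages with Option/Maybe types go further and make the unchecked dereference a compile-time error.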

    Read the article

  • CentOS 5.5 Package documentation

    - by fthinker
    Usually when I install a common package like PostgreSQL, MySQL or Python using Yum, it installs the files held within those packages into locations specific to CentOS, and may also install scripts specific to CentOS. These paths may not be the same as the defaults found in the source distributions on the PostgreSQL, MySQL or Python project websites, and the scripts are usually unique to CentOS. Recently, when I installed PostgreSQL under Ubuntu, I found some very nice distribution-specific information about how the install was organized and how to use the package in the Ubuntu way. I found this information in /usr/share/doc/ Is any such information included with CentOS?

    Read the article

  • Nginx + SSI doesn't work [migrated]

    - by boopidoopi
    I have a problem: Nginx doesn't work with SSI. Nginx listens on port 80 (frontend), apache2 listens on port 81 (backend). This is my nginx configuration:

        server {
            listen 80;
            server_name test.dev www.test.dev;
            error_log /var/log/nginx/error.log debug;
            log_subrequest on;
            location / {
                ssi on;
                proxy_pass http://localhost:81;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                client_max_body_size 15m;
                client_body_buffer_size 128k;
            }
        }

    The SSI include in the test.dev index.php:

        <!--# include virtual="http:test.dev/test.html" -->

    When I open test.dev/index.php I see a clean page. In the page source:

        <!--# include virtual="http:test.dev/test.html" -->

    So how do I enable SSI in nginx? Can you help me?
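
    One thing worth checking against the ngx_http_ssi_module documentation: include virtual takes a URI on the same server, not an absolute http: URL, so the directive as written is not one nginx will execute. A sketch of the expected form:

        <!--# include virtual="/test.html" -->

    With the config above, the /test.html subrequest is itself proxied to Apache on port 81, and with log_subrequest on it should show up in the access log, which gives a quick way to confirm whether SSI processing is happening at all.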

    Read the article
