Search Results

Search found 30555 results on 1223 pages for 'closed source'.


  • aplay -l says no soundcards found; alsaconf says no supported cards; yet /proc/asound contains cards

    - by nimasmi
    I am trying to get HDMI output using a Gainward Nvidia 210 512 MB on Ubuntu 10.04 Lucid Lynx. I have upgraded alsa-driver, alsa-lib and alsa-utils to 1.0.24 by building from source, thanks to this blog post. Some relevant output:

        user@box:~$ lspci | grep Audio
        00:05.0 Audio device: nVidia Corporation MCP61 High Definition Audio (rev a2)
        01:09.0 Multimedia video controller: Conexant Systems, Inc. CX23880/1/2/3 PCI Video and Audio Decoder (rev 05)
        01:09.2 Multimedia controller: Conexant Systems, Inc. CX23880/1/2/3 PCI Video and Audio Decoder [MPEG Port] (rev 05)
        01:09.4 Multimedia controller: Conexant Systems, Inc. CX23880/1/2/3 PCI Video and Audio Decoder [IR Port] (rev 05)
        02:00.1 Audio device: nVidia Corporation High Definition Audio Controller (rev a1)

        user@box:~$ cat /proc/asound/version
        Advanced Linux Sound Architecture Driver Version 1.0.24.
        Compiled on Sep 15 2012 for kernel 2.6.32-42-generic (SMP).

        user@box:~$ ls /proc/asound
        card0 cards hwdep NVidia oss seq version card1 devices modules NVidia_1 pcm timers

        user@box:~$ aplay -l
        aplay: device_list:240: no soundcards found...

        user@box:~$ sudo /sbin/alsa-utils start
         * Setting up ALSA...
         * warning: 'alsactl restore' failed with error message 'alsactl: set_control:1403: Cannot write control '2:0:0:IEC958 Playback Default:0' : Operation not permitted'...
        amixer: Invalid command!
        ...done.

    Any help appreciated. PS: my video card is connected only through the PCI-E slot. I assume there is no extra audio connection required.

    Read the article

  • Webmaster Tools is throwing out 404 errors on a link that is not on the page

    - by plantify
    Webmaster Tools is showing thousands of 404 errors, where pages on the site are referring to another, incorrect URL. For example: URL not found www.plantify.co.uk/shop/=, linked from http://www.plantify.co.uk/shop/gift-voucher and http://www.plantify.co.uk/shop/special-plant-offers. I obviously have checked the source and cannot find any references to this link on any page. The only consistent pattern is that it only seems to report this error on pages with two path segments, i.e. www.plantify.co.uk/shop does not report any error, whilst all pages of the form www.plantify.co.uk/shop/xxx (where xxx can be several different pages, such as gift-voucher) all report it. I cannot seem to duplicate this error. I have run a link checker (we use Screaming Frog) and it does not report this error. I have fetched these pages as a bot, and these do not report this error either. I am at a total loss. I cannot even duplicate the issue, but it is most definitely an issue, as Webmaster Tools is reporting new errors every day. Is this perhaps Googlebot doing its own thing?

    Read the article

  • Nginx + SSI doesn't work [migrated]

    - by boopidoopi
    I have a problem: Nginx doesn't work with SSI. Nginx listens on port 80 (frontend), apache2 listens on port 81 (backend). This is my nginx configuration:

        server {
            listen 80;
            server_name test.dev www.test.dev;
            error_log /var/log/nginx/error.log debug;
            log_subrequest on;
            location / {
                ssi on;
                proxy_pass http://localhost:81;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                client_max_body_size 15m;
                client_body_buffer_size 128k;
            }
        }

    The SSI include in test.dev's index.php:

        <!--# include virtual="http:test.dev/test.html" -->

    When I open test.dev/index.php I see a clean page. In the page source:

        <!--# include virtual="http:test.dev/test.html" -->

    So how do I enable SSI in nginx? Can you help me?

    Read the article

  • How can I get rid of the long Google results URLs?

    - by Teifi
    google.com is always shielded by our firewall. When I search for something at google.com, a result list appears. When I then click a link, the URL changes to a processed URL like:

        http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CDcQFjAA&url=http%3A%2F%2Fwww.amazon.com%2F&ei=PE_AUMLmFKW9iAfrl4HoCQ&usg=AFQjCNGcA9BfTgNdpb6LfcoG0sjA7hNW6A&cad=rjt

    Then my browser is blocked, because of google.com I guess. The only useful information in that long processed URL is http%3A%2F%2Fwww.amazon.com (http://www.amazon.com). My questions: What is the meaning of that long processed URL? Is there a way to remove the google.com/url?sa=... wrapper each time I click a search result?
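    For what it's worth, the real destination can be recovered mechanically from the url= parameter. A minimal Java sketch (the JDK calls are standard; the layout of Google's redirect parameters is assumed from the example above):

        import java.net.URLDecoder;

        // Minimal sketch: pull the "url=" parameter out of a Google redirect URL
        // and percent-decode it. Assumes the parameter layout shown above.
        public class GoogleRedirect {
            static String extractTarget(String redirectUrl) throws Exception {
                String query = redirectUrl.substring(redirectUrl.indexOf('?') + 1);
                for (String param : query.split("&")) {
                    if (param.startsWith("url=")) {
                        // %3A%2F%2F etc. decode back to "://"
                        return URLDecoder.decode(param.substring(4), "UTF-8");
                    }
                }
                return null; // no url= parameter found
            }

            public static void main(String[] args) throws Exception {
                String u = "http://www.google.com/url?sa=t&url=http%3A%2F%2Fwww.amazon.com%2F&usg=x";
                System.out.println(extractTarget(u)); // prints http://www.amazon.com/
            }
        }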

    Read the article

  • CentOS 5.5 Package documentation

    - by fthinker
    Usually when I install a common package like PostgreSQL, MySQL or Python using Yum, it installs the files held within those packages into locations specific to CentOS itself. It may also install scripts specific to CentOS only. These paths may not be the same as the defaults found within the source distributions on the PostgreSQL, MySQL and Python project websites, and the scripts are usually unique to CentOS. Recently, when I installed PostgreSQL under Ubuntu, I found some very nice distribution-specific information about how the install was organized and how to use the package in an Ubuntu way. I found this information in /usr/share/doc/. Is any such information included with CentOS?

    Read the article

  • Poppler installation

    - by Menopia
    I downloaded the new poppler 0.15 tarball and built it from source successfully, but when I try dpkg -l | grep poppler it outputs:

        ii  libpoppler-dev        0.14.3-0ubuntu1.1          PDF rendering library -- development files
        ii  libpoppler-glib-dev   0.14.3-0ubuntu1.1          PDF rendering library -- development files (GLib interface)
        ii  libpoppler-glib4      0.12.4-1ubuntu1            PDF rendering library (GLib-based shared library)
        ii  libpoppler-glib5      0.14.3-0ubuntu1.1          PDF rendering library (GLib-based shared library)
        ii  libpoppler5           0.12.4-1ubuntu1            PDF rendering library
        rc  libpoppler6           0.14.2.is.0.14.1-0ubuntu1  PDF rendering library
        ii  libpoppler7           0.14.3-0ubuntu1.1          PDF rendering library
        ii  poppler-utils         0.14.3-0ubuntu1.1          PDF utilities (based on libpoppler)

    So, AFAIK, this means that the new version is not installed!

    Read the article

  • Are null references really a bad thing?

    - by Tim Goodman
    I've heard it said that the inclusion of null references in programming languages is the "billion dollar mistake". But why? Sure, they can cause NullReferenceExceptions, but so what? Any element of the language can be a source of errors if used improperly. And what's the alternative? I suppose instead of saying this:

        Customer c = Customer.GetByLastName("Goodman"); // returns null if not found
        if (c != null)
        {
            Console.WriteLine(c.FirstName + " " + c.LastName + " is awesome!");
        }
        else
        {
            Console.WriteLine("There was no customer named Goodman. How lame!");
        }

    You could say this:

        if (Customer.ExistsWithLastName("Goodman"))
        {
            Customer c = Customer.GetByLastName("Goodman"); // throws error if not found
            Console.WriteLine(c.FirstName + " " + c.LastName + " is awesome!");
        }
        else
        {
            Console.WriteLine("There was no customer named Goodman. How lame!");
        }

    But how is that better? Either way, if you forget to check that the customer exists, you get an exception. I suppose that a CustomerNotFoundException is a bit easier to debug than a NullReferenceException, by virtue of being more descriptive. Is that all there is to it?
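    For context, the alternative usually offered (not mentioned in the question above) is an option type, which moves "may be absent" into the return type so the caller is forced to deal with it. A minimal sketch in Java (16+, for the record syntax), with hypothetical Customer and findByLastName stand-ins for the question's API:

        import java.util.Optional;

        // Sketch of the usual alternative to null: an option type. Customer
        // and findByLastName are invented stand-ins, not a real API.
        public class OptionSketch {
            record Customer(String firstName, String lastName) {}

            // The return type itself advertises "there may be no such customer".
            static Optional<Customer> findByLastName(String name) {
                if (name.equals("Goodman")) {
                    return Optional.of(new Customer("Tim", "Goodman"));
                }
                return Optional.empty(); // no null in sight
            }

            public static void main(String[] args) {
                // The caller cannot reach the Customer without handling absence:
                String msg = findByLastName("Goodman")
                        .map(c -> c.firstName() + " " + c.lastName() + " is awesome!")
                        .orElse("There was no customer named Goodman. How lame!");
                System.out.println(msg);
            }
        }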

    Read the article

  • Is it safe to operate a laptop without battery?

    - by leladax
    I know it's 'unsafe' in terms of data loss, but I noticed that motherboards still have some of their circuits powered when the laptop is plugged in (e.g. a circuit that must wait for power-on signals is certainly one of them). Hence, I wondered if it would increase the life of the laptop if the battery were simply left out. It might also increase the battery's life, but that's the least of my concerns. Note that the main point is to unplug the laptop when it hibernates and have no power source whatsoever for the duration of being off (apart from the clock battery), i.e. to save having to pull the battery out every time.

    Read the article

  • Choosing the Database Solution for Large Data Application

    - by GµårÐïåñ
    I have been tasked to write an application that will be a combination of document and inventory management, in VB.NET, which will be used to store document images in TIFF, PDF, XPS, TXT, DOC, PPT and so on as binary data that can be retrieved for viewing, printing, and possibly OCR to be searchable as well, along with metadata such as sender, recipient, type of document, date, source, etc. So the table would probably be something like:

        DOC_NAME, DOC_DATE, NOTES, ... DOC_BINARY (where the actual document will be put inside)

    My concern is finding a database solution that will not become unstable due to size restrictions, record limitations and performance. Some of the options are MS SQL, SQL Express, SQLite, MySQL, and Access. Now, I can pretty much eliminate Access right off the bat, as it is just too limiting and not scalable. I can further eliminate SQL Express because of the 2 GB limit, and again scalability. So that leaves me with MS SQL, SQLite and MySQL (although if anyone has other options they think would be good as well, please feel free to share them; by no means am I set on these only). So this brings me to what you guys think is the best option for what I have described. The goal is that the data is all in one place (a single file), which will make backup and portability easier. For small-volume usage pretty much any solution will hold for a while, but my goal is to think ahead and make sure it's able to withstand heavy, large-volume usage as well. Another consideration is the interoperability with .NET and the stability of such code, to avoid errors and memory leaks. Your feedback would be greatly appreciated.
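    As an illustration of the "single file" goal, here is a minimal sketch of storing one document plus its metadata as a BLOB in SQLite; it is written in Java rather than VB.NET, it assumes the third-party sqlite-jdbc driver is on the classpath, and the file and table names are invented for the example:

        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        // Sketch: a one-file SQLite database holding document metadata plus the
        // raw bytes. Assumes the (third-party) sqlite-jdbc driver is available;
        // table, column and file names mirror the question but are illustrative.
        public class DocStore {
            public static void main(String[] args) throws Exception {
                try (Connection db = DriverManager.getConnection("jdbc:sqlite:docs.db")) {
                    db.createStatement().execute(
                        "CREATE TABLE IF NOT EXISTS docs (" +
                        " doc_name TEXT, doc_date TEXT, notes TEXT, doc_binary BLOB)");

                    byte[] bytes = Files.readAllBytes(Paths.get("invoice.pdf"));
                    try (PreparedStatement ins = db.prepareStatement(
                            "INSERT INTO docs (doc_name, doc_date, notes, doc_binary)" +
                            " VALUES (?, date('now'), ?, ?)")) {
                        ins.setString(1, "invoice.pdf");
                        ins.setString(2, "sample document");
                        ins.setBytes(3, bytes);   // the DOC_BINARY column from above
                        ins.executeUpdate();
                    }
                }
            }
        }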

    Read the article

  • Saving 16:9 video in Movie Maker without black border

    - by Tschareck
    I'm editing my video in Windows Live Movie Maker from Live Essentials 2011. My source video is from a camera and is in .mp4 format with a size of 1280 x 720. After editing in Movie Maker, I save the movie, and no matter what option I choose, I always end up with a .wmv file that is either a 4:3 image with black stripes above and below the video, or 16:9 with a black frame all around the image. What settings should I use to be able to export or save the video in 1280 x 720 without any black border?

    Read the article

  • Who should have full visibility of all (non-data) requirements information?

    - by ebyrob
    I work at a smallish mid-size company where requirements are sometimes nothing more than an email or a brief meeting with a subject matter manager requiring some new feature. Should a programmer working on a feature reasonably expect to have access to such "request emails" and other requirements information? Or is it more appropriate for a "program manager" (PGM) to rewrite all requirements before sharing them with programmers? The company is not technology-centric and has between 50 and 250 employees (fewer than 10 programmers in total). Our project management "software" consists of a "TODO.txt" checked into source control in "/doc/". Note: this has nothing to do with "sensitive data access", unless a particular subject matter manager's style of email correspondence is top secret. Given the suggested duplicate, perhaps this could be a turf war, as the PGM would like to specify HOW, whereas WHY is absent and WHAT is muddled by the time it gets through to the programmer(s). Basically: should specifications be transparent to programmers? Presumably a history of requirements might exist; shouldn't a programmer be able to see that history of reqs if/when they can tell something is hinky in the spec? This isn't a question about organizing requirements; it is a question about WHO should have full VISIBILITY of requirements. I'd propose it should be ALL STAKEHOLDERS. Please point out where I'm wrong here.

    Read the article

  • I still can't figure out how to program!

    - by Mark K.
    Please help! I've read lots of programming books for various languages: Java, Python, C, etc. I understand and know all of the basics of the languages, and I understand algorithms and data structures. (The equivalent of, say, 2 years of CompSci classes.) BUT, I still can't figure out how to write a program that does anything useful. All of the programming books show you how to write the language, but NOT how to use it! The programming examples are all very basic, like building a card catalog for a library, or a simple game, or using algorithms, etc. They don't show you how to develop complex programs that actually do anything useful! I've looked at open-source programs on SourceForge, but they don't make much sense to me. There are hundreds of files in each program and thousands of lines of code. But how do I learn how to do this? There's nothing in any book I can buy on Amazon that will give me the tools to write any of these programs. How do you go from reading Intro to Java, or Programming Python, or The C Programming Language, etc., to actually being able to say: I have an idea for X program, and this is how I go about developing it? It seems like there is so much more involved in writing a program than you can learn in a book or from a class. I feel like there is something missing. Can anyone put me on the right track?

    Read the article

  • Entity Object Extension in Oracle Application R12

    - by Manoj Madhusoodanan
    In this blog I will explain how to perform an Entity Object (EO) extension. As a prerequisite, please read my previous blog. I am doing this exercise based on a PL/SQL EO. The following attributes are part of FndUserEO. Here I will add a validation to the UserName attribute: "Length should be > 5". The following steps need to be performed.

    1) Download all files of "Entity Object Based on PL/SQL" to JDEV_USER_HOME/myprojects and JDEV_USER_HOME/myclasses. If you want to see the content of the source java file, decompile it and save it in JDEV_USER_HOME/myprojects.
    2) Create a new Entity Object XXFndUserEO as follows. Include all attributes of the parent EO.
    3) Add the validation code snippet to XXFndUserEOImpl.java (a reconstruction of this snippet appears below).
    4) Create the substitution as follows.
    5) Migrate the files to $JAVA_TOP: xxcustom.oracle.apps.fnd.user.schema.server.XXFndUserEOImpl.java and xxcustom.oracle.apps.fnd.user.schema.server.XXFndUserEO.xml.
    6) Migrate the substitution.
    7) Bounce the server.
    8) Verify the substitution has been applied properly. Access the Create User page and create a user. You can see the validation message if the user name length is less than 5. Give the user name as XXCUST4 and verify the table. The FND_USER has been created successfully.
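    The original post shows the step 3 snippet only as a screenshot, so the following is a reconstruction of the kind of setter-level validation it describes, using the standard OAF override pattern; the message application short name and message name below are invented for illustration:

        // Inside XXFndUserEOImpl.java: override the UserName setter so that
        // values of length <= 5 are rejected ("Length should be > 5").
        // Reconstruction only -- the real snippet is an image in the post,
        // and the XXCUST/XX_USER_NAME_TOO_SHORT message is hypothetical.
        public void setUserName(String value)
        {
            if (value != null && value.length() <= 5)
            {
                throw new oracle.apps.fnd.framework.OAException(
                    "XXCUST", "XX_USER_NAME_TOO_SHORT");
            }
            super.setUserName(value);
        }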

    Read the article

  • Rsync fails for files that start with underscore when destination is zfs

    - by Eric
    Hi everyone. I'm using rsync 3.1.0pre1 on Mac OS X 10.8.5, and am trying to rsync one folder to another. The destination is a ZFS volume mounted via SMB. The problem I'm having is that files that start with an underscore (e.g., '_filename.jpg') are not being successfully synced to the destination. I get the following error message:

        rsync: mkstemp "/path/to/destination/._filename.jpg.NUgYJw" failed: Permission denied (13)

    In this case, '_filename.jpg' does not make it to the destination. I understand that rsync creates hidden, temporary files at the destination which are preceded with '.' and have a random file extension appended on the end. But the original filename starts with '_', not '.', and I haven't asked rsync to copy extended attributes / resource forks over (unless it always does that). The rsync command I'm using is:

        rsync -avE --exclude='.DS_Store' --exclude '.Trash' --exclude 'Thumbs.db' --exclude '._*' --delete /source/ /destination/

    Has anyone found a way around this problem? Thank you!

    Read the article

  • Natural talent vs experience [on hold]

    - by Tord Johansson Munk
    Hi. I have a question for you guys: say you had a choice of hiring one of two programmers. One of them is a natural born programming talent; he has been programming since he was 14 years old, and he has been programming all sorts of things by himself: 3D renderers, games, his own frameworks. He is really good at algorithms and problem solving. He is now about 25 years old, and after some unchallenging years of college the only experience he has is working on his own/university stuff and some open source projects. This guy spends all his free time programming and has several pet projects at home. The other person is a 37-year-old career programmer. He has been programming since he graduated from university at the age of 26 and has been working since then. He did not have an interest in programming before university. During his studies he discovered that programming was fun and challenging, but it never was a "passion". During his career he mainly worked with "enterprise" platforms such as .NET or Java EE. He has mainly done database business applications, and thus lacks the skills of the young talent, like abstract problem solving or algorithms. But he knows the tools he has been using over the years, and is reliable and almost always makes his boss happy. He keeps himself updated in the platform and tools he uses. But outside the office walls he doesn't touch any code at all. Which one would you hire? Would you favor one of them for certain projects? Do you think that if the young talent learns his tools he will be a better programmer than the older one? Would your decision be different if both of them were lacking a degree? Or if only one of them was lacking a degree, be it the old and experienced or the young genius?

    Read the article

  • How do I setup a syslog server for my network?

    - by Solignis
    I would like to set up a syslog server to forward all log files from all of my VMs and servers. I really don't know much about what is out there, so I turn to the community. Something on Linux is fine. What I want most is alerting ability, like emails telling me something is not right. If there were something to sort the logs by source, that would be cool. Where would I want to run the syslog server: my admin workstation or a server/VM? Any input would be wonderful. Thanks in advance.
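    Not an answer by itself (a real deployment would use something like rsyslog or syslog-ng), but conceptually a syslog server is just a listener on UDP port 514 that files each message by the host it came from. A toy Java sketch of that core loop, with invented file naming:

        import java.io.FileWriter;
        import java.net.DatagramPacket;
        import java.net.DatagramSocket;

        // Toy illustration only: receive syslog datagrams on UDP 514 and
        // append each message to a per-source-host log file.
        public class TinySyslog {
            public static void main(String[] args) throws Exception {
                try (DatagramSocket sock = new DatagramSocket(514)) { // port <1024 needs root
                    byte[] buf = new byte[8192];
                    while (true) {
                        DatagramPacket pkt = new DatagramPacket(buf, buf.length);
                        sock.receive(pkt);
                        String host = pkt.getAddress().getHostAddress();
                        String msg = new String(pkt.getData(), 0, pkt.getLength());
                        try (FileWriter out = new FileWriter(host + ".log", true)) {
                            out.write(msg + System.lineSeparator()); // sorted by source
                        }
                        // Alerting (e.g. email on matching "error") would hook in here.
                    }
                }
            }
        }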

    Read the article

  • BackupPC - are full backups really full when using rsync?

    - by mhost
    Hi. When you run a full backup in BackupPC and you use rsync as the transfer method, does it actually transfer the full backup source, or does it only transfer the changes? The docs seem to imply that it would transfer the full thing, and that only an incremental would transfer the changes. If this is the case, could I simply use incrementals only, and never do a full backup? Given the way the backups are stored (using hard links to make each incremental appear full), I would think that this would be the best method: incrementals will only transfer the changes, yet each backup will appear full. Thanks.
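    For reference, the hard-link arrangement described above can be sketched in a few lines. This is only an illustration of the mechanism, with invented paths, not BackupPC's actual implementation:

        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.util.stream.Stream;

        // Sketch of the hard-link technique the question describes: a new
        // snapshot directory where unchanged files are hard links into the
        // previous snapshot, so every snapshot *looks* full while unchanged
        // bytes exist on disk only once.
        public class SnapshotSketch {
            public static void main(String[] args) throws Exception {
                Path prev = Paths.get("backups/snap.0");
                Path next = Paths.get("backups/snap.1");
                Files.createDirectories(next);

                try (Stream<Path> files = Files.list(prev)) {
                    for (Path old : (Iterable<Path>) files::iterator) {
                        Path copy = next.resolve(old.getFileName());
                        // An unchanged file costs only a directory entry:
                        Files.createLink(copy, old);
                        // (A real tool would transfer changed files instead of linking.)
                    }
                }
                // Deleting snap.0 later does not hurt snap.1: the link count just drops.
            }
        }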

    Read the article

  • Is wisdom of what happens 'behind the scenes' (in the compiler, external DLLs, etc.) important?

    - by I_Question_Things_Deeply
    I have been a computer fanatic for almost a decade now. I've always loved and wondered how computers work, from the purest, lowest hardware level to the very smallest pixel on the screen, and all the software around that. That seems to be my problem though... as I try to write code (I'm pretty fluent at C++) I always sit there for enormous amounts of time in front of a text editor, wondering how every line, statement, datum, function, etc. will correspond to every Assembly and machine instruction performed to do absolutely everything necessary for the kernel to allocate memory to run my compiled program, and all of the other hardware being used as well. For example, I would write:

        cout << "Before memory changed" << endl;

    and run the debugger to get the Assembly for this, and then try to reverse-disassemble the Assembly to machine code based on my ISA, and then research every .dll, library file, linked library, linking process, linker source code of the program, the make file, the steps the kernel I'm using takes to process this compilation, and the hardware involved aside from the processor (e.g. video card, sound card, chipset, cache latency, byte-sized registers, calling conventions, DDR3 RAM and disk drive, filesystem functioning, and so many other things). Am I going about programming wrong? I mean, I feel I should know everything that goes on underneath the English syntax of a computer program. But the problem is that the more I research every little thing, the less I actually accomplish at all. I can never finish anything because of this mentality, yet I feel compelled to know everything... what should I do?

    Read the article

  • Encapsulating code in F# (Part 2)

    - by MarkPearl
    In part one of this series I showed an example of encapsulation within a local definition. This is useful to know so that you are aware of the scope of value holders etc., but what I am more interested in is encapsulation with regards to generating useful F# code libraries in .NET; this is done by using namespaces and modules. Let's have a look at some C# code first...

        using System;

        namespace EncapsulationNS
        {
            public class EncapsulationCLS
            {
                public static void TestMethod()
                {
                    Console.WriteLine("Hello");
                }
            }
        }

    Pretty simple stuff... now the F# equivalent...

        namespace EncapsulationNS

        module EncapsulationMDL =
            let TestFunction =
                System.Console.WriteLine("Hello")
                ()

    Even easier... let's look at some specifics about F# namespaces. Namespaces are open, meaning that multiple source files and assemblies can contribute to the same namespace. So namespaces are a great way to group modules together, and the question needs to be asked: what role do modules play? For me, the F# module is in many ways similar to the VB6 days of modules. In VB6, modules were separate files that simply allowed us to group certain methods together. I find it easier to visualize F# modules this way than to compare them to C# classes. That being said, one is not restricted to one module per file (there is flexibility to have multiple modules in one code file); however, with my limited F# experience I would still recommend using the file as the standard level of separating modules, as it is very easy to then find your way around a solution. An important note about interop between F# and other .NET languages: I wrote a blog post a while back about a very basic F# to C# interop. If I were to reference an F# library in a C# project (for instance 'TestFunction'), C# would show this method as a static method call, meaning I would not have to instantiate an instance of the module.

    Read the article

  • Server Backup Solutions - compiling?

    - by Webnet
    I've been researching backup solutions for a LAMP environment, to back up our databases and files alike. I'm looking for open source with a UI (so I'm less likely to screw it up). I downloaded http://www.bacula.org/en/ and a few others, but they all talk about compiling first... this doesn't seem like something I should need to do... is there a Linux package that handles backups that I don't know about? I should also specify I'm looking to set up a backup server which backs up from several locations.

    Read the article

  • How do 2D physics engines solve the problem of resolving collisions along tiled walls/floors in non-grid-based worlds?

    - by ssb
    I've been working on implementing my SAT algorithm which has been coming along well, but I've found that I'm at a wall when it comes to its actual use. There are plenty of questions regarding this issue on this site, but most of them either have no clear, good answer or have a solution based on checking grid positions. To restate the problem that I and many others are having, if you have a tiled surface, like a wall or a floor, consisting of several smaller component rectangles, and you traverse along them with another rectangle with force being applied into that structure, there are cases where the object gets caught on a false collision on an edge that faces the inside of the shape. I have spent a lot of time thinking about how I could possibly solve this without having to resort to a grid-based system, and I realized that physics engines do this properly. What I want to know is how they do this. What do physics engines do beyond basic SAT that allows this kind of proper collision resolution in complex environments? I've been looking through the source code to Box2D trying to find out how they do it but it's not quite as easy as looking at a Collision() method. I think I'm not good enough at physics to know what they're doing mathematically and not good enough at programming to know what they're doing programmatically. This is what I aim to fix.
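    For reference, Box2D's own fix is its edge/chain shapes with "ghost vertices", which clip each contact normal so it can never point out of an internal edge. A simpler mitigation used by many homegrown SAT engines is to weld runs of collinear tile edges into single long segments before testing, so internal vertices never exist at all. A hypothetical Java sketch of that preprocessing step (the Point record and weld function are invented names):

        import java.util.ArrayList;
        import java.util.List;

        // Sketch: collapse chains of collinear edges (e.g. the tops of a row
        // of floor tiles) into single segments, so a body sliding along them
        // never meets an internal vertex to snag on.
        public class EdgeWelder {
            record Point(double x, double y) {}

            static List<Point> weld(List<Point> polyline) {
                List<Point> out = new ArrayList<>();
                for (Point p : polyline) {
                    while (out.size() >= 2) {
                        Point a = out.get(out.size() - 2);
                        Point b = out.get(out.size() - 1);
                        // Drop b if a -> b -> p is collinear (cross product ~ 0).
                        double cross = (b.x() - a.x()) * (p.y() - a.y())
                                     - (b.y() - a.y()) * (p.x() - a.x());
                        if (Math.abs(cross) > 1e-9) break;
                        out.remove(out.size() - 1);
                    }
                    out.add(p);
                }
                return out;
            }

            public static void main(String[] args) {
                // Top edges of four 1x1 floor tiles: five collinear points -> one segment.
                List<Point> tops = List.of(new Point(0, 0), new Point(1, 0),
                        new Point(2, 0), new Point(3, 0), new Point(4, 0));
                System.out.println(weld(tops)); // [Point[x=0.0, y=0.0], Point[x=4.0, y=0.0]]
            }
        }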

    Read the article

  • Velvet screen after grub selection

    - by Spleen
    After a fresh install of Ubuntu 11.10 64-bit, the boot seems to stop after selecting the Ubuntu option (same with the rescue one) in the grub menu. At first I thought this was related to grub-efi, as I've had similar problems after an Ubuntu 11.04 update which replaced grub-efi with grub-pc and got me stuck on an "elf magic" grub console (https://bugs.launchpad.net/ubuntu/+source/apt/+bug/800910). While the 11.04 problem was resolved with a simple chroot and apt-get install from the live CD, that solution doesn't work this time. The drive with the bootloader is a SATA 3 SSD, 64 GB, GPT-partitioned (sdb1: 20 MB EFI boot partition, FAT16; sdb2: 60 GB root, ext4; sdb3: 4 GB swap) on an MSI E350IA-E45 mainboard, with a pair of 2 TB ext4 MBR drives for photos/music/movies. I've tried a few grub-install/update-grub runs with boot-directory sdb1 from chroot, but I can't seem to go anywhere. Even this guide: http://en.gentoo-wiki.com/wiki/Grub2#EFI (of course I replaced grub2 with grub in the grub-install and efibootmgr commands) doesn't seem to get me anywhere. Any help or ideas are appreciated ;) Edit: I guess it's the combination of GPT/UEFI that also seems to haunt F16. Edit: same with 12.04 beta, btw.

    Read the article

  • How do you verify code that could take a substantial time to compile? [on hold]

    - by user18404
    As a follow-up to my previous question, "What is the best approach for coding in a slow compilation environment?", to recap: I am stuck with a large software system in which the TDD ideology of "test often" does not work, and to make it even worse, features like pre-compiled headers, multi-threaded compilation, incremental linking, etc. are not available to me. Hence I think the best way out would be to add extensive logging to the system and to start "coding in large chunks", by which I mean coding for two or three hours first (as opposed to 15-20 minutes in TDD), thoroughly eyeballing the code for 15 minutes, and only after all that doing the compilation and running the tests. As I have been doing TDD for quite a while, my code-eyeballing / code-verification skills have gotten rusty (you don't really need them that much if you can quickly verify what you've done in 5 seconds by running a test or two), so I am after recommendations on how to learn these source code verification / error spotting skills again. I know I was able to do that easily some 5-10 years ago, when I didn't have much support from the compiler/unit testing tools I have had until recently, so there should be a way to get back to the basics.

    Read the article

  • What would cause SQL 2008 Log Reader Agent to fail with "This process could not execute 'sp_replcmds

    - by Rick
    I've seen this error message in other posts, but they didn't seem to help resolve our issue. We are trying this with two SQL Server 2008 servers. I backed up my database from the source server and then restored it on our destination server. We set up basic transactional replication. The Snapshot Agent is working fine, but the Log Reader Agent fails with the error above. Is it most likely a login issue for this job, or a QueryTimeout?

    Read the article

  • Complex nagios command

    - by gonvaled
    I have defined the following command for one of my service checks:

        define command{
            command_name    mycommand
            command_line    $USER1$/check_by_ssh -p $ARG1$ -l nagios -i /etc/nagios2/keys/key1 -H $HOSTADDRESS$ -v -C 'source $USER10$ ; command.py -a get --alert-name $ARG2$ -q'
        }

    The problem is that nagios seems to parse the command up to the semicolon and produce garbage which cannot be executed. I have also tried putting a backslash (\;), to no avail. If I run the command directly in the shell, it works. This means that this is not a problem with check_by_ssh, but a problem in the parsing of the nagios configuration file. How can I debug this? Is there a way to get a listing of all the commands that nagios has parsed when reading the configuration files?

    Read the article
