Search Results

Search found 19908 results on 797 pages for 'bit ly'.


  • indirect rendering issue on 12.04, using ati driver

    - by lurscher
    I have an Ubuntu 12.04 64-bit system. When I run glxinfo I see some strange errors about indirect rendering and failures to load the lib32/dri/swrast_dri libraries. Any idea what is going on? Please let me know if I can enhance the relevant information provided in this question.

        $ LIBGL_DEBUG=verbose glxinfo
        name of display: :0
        libGL: screen 0 does not appear to be DRI2 capable
        libGL: OpenDriver: trying /usr/lib32/dri/tls/swrast_dri.so
        libGL: OpenDriver: trying /usr/lib32/dri/swrast_dri.so
        libGL error: dlopen /usr/lib32/dri/swrast_dri.so failed (/usr/lib32/dri/swrast_dri.so: cannot open shared object file: No such file or directory)
        libGL: OpenDriver: trying /usr/lib/dri/tls/swrast_dri.so
        libGL: OpenDriver: trying /usr/lib/dri/swrast_dri.so
        libGL error: dlopen /usr/lib/dri/swrast_dri.so failed (/usr/lib/dri/swrast_dri.so: cannot open shared object file: No such file or directory)
        libGL error: unable to load driver: swrast_dri.so
        libGL error: reverting to indirect rendering
        display: :0  screen: 0
        direct rendering: No (If you want to find out why, try setting LIBGL_DEBUG=verbose)
        server glx vendor string: ATI
        server glx version string: 1.4
        server glx extensions: GLX_ARB_multisample, GLX_EXT_import_context, GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_OML_swap_method, GLX_SGI_make_current_read, GLX_SGI_swap_control, GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGIX_visual_select_group
        client glx vendor string: Mesa Project and SGI
        client glx version string: 1.4
        client glx extensions: GLX_ARB_create_context, GLX_ARB_create_context_profile, GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context, GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_EXT_framebuffer_sRGB, GLX_EXT_create_context_es2_profile, GLX_MESA_copy_sub_buffer, GLX_MESA_multithread_makecurrent, GLX_MESA_swap_control, GLX_OML_swap_method, GLX_OML_sync_control, GLX_SGI_make_current_read, GLX_SGI_swap_control, GLX_SGI_video_sync, GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGIX_visual_select_group, GLX_EXT_texture_from_pixmap, GLX_INTEL_swap_event
        GLX version: 1.4
        GLX extensions: GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context, GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_MESA_multithread_makecurrent, GLX_OML_swap_method, GLX_SGI_make_current_read, GLX_SGI_swap_control, GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGIX_visual_select_group, GLX_EXT_texture_from_pixmap
        OpenGL vendor string: ATI Technologies Inc.
        OpenGL renderer string: AMD Radeon HD 6800 Series
        OpenGL version string: 1.4 (2.1 (4.2.11762 Compatibility Profile Context))
        OpenGL extensions: GL_ARB_depth_texture, GL_ARB_draw_buffers, GL_ARB_fragment_program, GL_ARB_fragment_program_shadow, GL_ARB_framebuffer_object, GL_ARB_imaging, GL_ARB_multisample, GL_ARB_multitexture, GL_ARB_occlusion_query, GL_ARB_point_parameters, GL_ARB_point_sprite, GL_ARB_shadow, GL_ARB_shadow_ambient, GL_ARB_texture_border_clamp, GL_ARB_texture_compression, GL_ARB_texture_cube_map, GL_ARB_texture_env_add, GL_ARB_texture_env_combine, GL_ARB_texture_env_crossbar, GL_ARB_texture_env_dot3, GL_ARB_texture_mirrored_repeat, GL_ARB_texture_non_power_of_two, GL_ARB_texture_rectangle, GL_ARB_transpose_matrix, GL_ARB_vertex_program, GL_ARB_window_pos, GL_EXT_abgr, GL_EXT_bgra, GL_EXT_blend_color, GL_EXT_blend_equation_separate, GL_EXT_blend_func_separate, GL_EXT_blend_minmax, GL_EXT_blend_subtract, GL_EXT_copy_texture, GL_EXT_draw_range_elements, GL_EXT_fog_coord, GL_EXT_framebuffer_blit, GL_EXT_framebuffer_multisample, GL_EXT_framebuffer_object, GL_EXT_multi_draw_arrays, GL_EXT_packed_pixels, GL_EXT_point_parameters, GL_EXT_rescale_normal, GL_EXT_secondary_color, GL_EXT_separate_specular_color, GL_EXT_shadow_funcs, GL_EXT_stencil_wrap, GL_EXT_subtexture, GL_EXT_texture3D, GL_EXT_texture_compression_s3tc, GL_EXT_texture_edge_clamp, GL_EXT_texture_env_add, GL_EXT_texture_env_combine, GL_EXT_texture_env_dot3, GL_EXT_texture_lod, GL_EXT_texture_lod_bias, GL_EXT_texture_mirror_clamp, GL_EXT_texture_object, GL_EXT_texture_rectangle, GL_EXT_vertex_array, GL_ATI_draw_buffers, GL_ATI_texture_env_combine3, GL_ATI_texture_mirror_once, GL_ATIX_texture_env_combine3, GL_IBM_texture_mirrored_repeat, GL_INGR_blend_func_separate, GL_NV_texture_rectangle, GL_SGIS_generate_mipmap, GL_SGIS_texture_border_clamp, GL_SGIS_texture_edge_clamp, GL_SGIS_texture_lod, GL_SGIX_shadow_ambient, GL_SUN_multi_draw_arrays
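
    A hedged reading of that output, not a confirmed fix: "screen 0 does not appear to be DRI2 capable" is expected when the fglrx blob is driving the display (fglrx does not use DRI2), so the remaining symptom is that the Mesa software-rasterizer fallback (swrast) cannot be found in either the 32-bit or 64-bit search paths. Those files normally come from the Mesa DRI packages, so one thing to try (package names are assumptions based on 12.04 multiarch):

        # Check that fglrx is actually loaded and which libGL is being resolved
        lsmod | grep fglrx
        ldconfig -p | grep libGL
        # Supply the missing swrast fallbacks for both architectures
        sudo apt-get install libgl1-mesa-dri
        sudo apt-get install libgl1-mesa-dri:i386
        # Re-test
        LIBGL_DEBUG=verbose glxinfo | head -n 25

    If direct rendering still reports "No" afterwards, a mixup between the fglrx and Mesa copies of libGL is the likelier culprit.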

    Read the article

  • vsftpd: chroot_local_user causes GNU/TLS-error

    - by akrosikam
    Distro: Ubuntu 12.04.2 Server 32-bit. FTP server: vsftpd 2.3.5 (from the default "main" repository). Problem: Since upgrading from Ubuntu 10.04 to Ubuntu 12.04 (nothing changed on the client side), vsftpd has refused to make chroot jails with the "chroot_local_user" directive on FTP(e/i)S connections. Here's my vsftpd.conf:

        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        local_umask=022
        dirmessage_enable=YES
        xferlog_enable=YES
        xferlog_std_format=YES
        ftpd_banner=How are you gentlemen.
        listen=YES
        pam_service_name=vsftpd
        userlist_enable=YES
        userlist_deny=NO
        tcp_wrappers=YES
        connect_from_port_20=YES
        ftp_data_port=20
        listen_port=21
        pasv_enable=YES
        pasv_promiscuous=NO
        pasv_min_port=4242
        pasv_max_port=4252
        pasv_addr_resolve=YES
        pasv_address=your.domain.com
        ssl_enable=YES
        allow_anon_ssl=NO
        force_local_logins_ssl=YES
        force_local_data_ssl=YES
        ssl_tlsv1=YES
        ssl_sslv2=NO
        ssl_sslv3=NO
        rsa_cert_file=/home/maw/ssl_ftp_test/vsftpd.pem
        rsa_private_key_file=/home/maw/ssl_ftp_test/vsftpd.pem
        debug_ssl=YES
        log_ftp_protocol=YES
        ssl_ciphers=HIGH
        chroot_local_user=NO

    How to reproduce:
    1. Have a working SSL/TLS-secured vsftpd configuration (I suggest one similar to the above) ready. Try to connect with an FTP client and upload some files. With my setup, the above config works well at this point.
    2. Edit /etc/vsftpd.conf and set chroot_local_user= to YES. Make sure that chroot_list_enable= and/or chroot_list_file= are not set. Comment them out if they are. Save and exit.
    3. Run sudo restart vsftpd (or sudo service vsftpd restart if you like) in a terminal.
    4. Try to connect with an FTP client. You should see a message more or less like this: GnuTLS error -15: An unexpected TLS packet was received.

    This is an issue for me, as I do not want FTP sessions to be able to list files outside the user's home folder. I have checked with several client-side apps, and I get the same results with every one of them. Filezilla is not so good regarding cipher methods nowadays, but as I am able to make an FTP(e)S connection over TLS (as long as chroot'ing is disabled and ssl_ciphers is set to HIGH), I have a feeling ciphers are not the issue this time, and that I won't find the answer by tweaking configs on the client side. My vsftpd.log stays empty, even though debug_ssl and log_ftp_protocol are enabled, so no info there either.
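
    A widely reported culprit for exactly this symptom (an assumption that it applies here): vsftpd 2.3.5 refuses to run with a root directory that is writable inside the chroot jail, and when that refusal happens after the TLS handshake, some clients surface it as a stray GnuTLS packet error instead of vsftpd's own "500 OOPS" text. Two hedged ways around it:

        # Option 1: allow the writable root; the allow_writeable_chroot
        # directive only exists in newer or distro-patched vsftpd builds
        chroot_local_user=YES
        allow_writeable_chroot=YES

        # Option 2: make the chroot root non-writable and upload to a subdirectory
        sudo chmod a-w /home/maw
        sudo mkdir /home/maw/uploads
        sudo chown maw:maw /home/maw/uploads

    If your build rejects allow_writeable_chroot, option 2 needs no new directives at all.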

    Read the article

  • Making a Case For The Command Line

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2013/06/30/making-a-case-for-the-command-line.aspx

    I have had an idea percolating in the back of my mind for over a year now that I’ve just recently started to implement. This idea relates to building out “internal tools” to ease the maintenance and on-going support of a software system. The system that I currently work on is (mostly) web-based, so traditionally we have built these internal tools in the form of pages within the app that are only accessible by our developers and support personnel. These pages allow us to perform tasks within the system that, for one reason or another, we don’t want to let our end users perform (e.g. mass create/update/delete operations on data, flipping switches that turn paid modules of the system on or off, etc.). When we try to build new tools like this we often struggle with the level of effort required to build them.

    Effort Required

    Creating a whole new page in an existing web application can be a fairly large undertaking. You need to create the page and ensure it will have a layout that is consistent with the other pages in the app. You need to decide what types of input controls need to go onto the page. You need to ensure that everything uses the same style as the rest of the site. You need to figure out what the text on the page should say. Then, when you figure out that you forgot about an input that should really be present, you might have to go back and re-work the entire thing. Oh, and in addition to all of that, you still have to, you know, write the code that actually performs the task. Everything other than the code that performs the task at hand is just overhead. We don’t need a fancy date picker control in a nicely styled page for the vast majority of our internal tools. We don’t even really need a page, for that matter. We just need a way to issue a command to the application and have it, in turn, execute the code that we’ve written to accomplish a given task. All we really need is a simple console application!

    Plumbing Problems

    A former co-worker of mine, John Sonmez, always advocated the Unix philosophy for building internal tools: start with something that runs at the command line, and then build a UI on top of that if you need to. John’s idea has a lot of merit, and we tried building out some internal tools as simple Console applications. Unfortunately, this was often easier said than done. Doing a “File –> New Project” to build out a tool for a mature system can be pretty daunting because that new project is totally empty. In our case, the web application code had a lot of “plumbing” built in: it managed authentication and authorization, it handled database connection management for our multi-tenanted architecture, and it managed all of the context that needs to follow a user around the application, such as their timezone and regional/language settings. In addition, the configuration file for the web application (a web.config in our case because this is an ASP.NET application) is large and would need to be reproduced into a similar configuration file for a Console application. While most of these problems could be solved pretty easily with some refactoring of the codebase, building Console applications for internal tools still potentially suffers from one pretty big drawback: you’d have to execute them on a machine with network access to all of the needed resources.
    Obviously, our web servers can easily communicate with the database servers and can publish messages to our service bus, but the same is not true for all of our developer and support personnel workstations. We could have everyone run these tools remotely via RDP or SSH, but that’s a bit cumbersome and certainly a lot less convenient than having the tools built into the web application that is so easily accessible.

    Mix and Match

    So we need a way to build tools that are easily accessible via the web application but also don’t require the overhead of creating a user interface. This is where my idea comes into play: why not just build a command line interface into the web application? If it’s part of the web application we get all of the plumbing that comes along with that code, and we’re executing everything on the web servers, which means we’ll have access to any external resources that we might need. Rather than having to incur the overhead of creating a brand new page for each tool that we want to build, we can create one new page that simply accepts a command in text form and executes it as a request on the web server. In this way, we can focus on writing the code to accomplish the task. If the tool ends up being heavily used, then (and only then) should we consider spending the time to build a better user experience around it. To be clear, I’m not trying to downplay the importance of building great user experiences into your system; we should all strive to provide the best UX possible to our end users. I’m only advocating this sort of bare-bones interface for internal consumption by the technical staff that builds and supports the software. This command line interface should be the “back end” to a highly polished and eye-pleasing public face.

    Implementation

    As I mentioned at the beginning of this post, this is an idea that I’ve had for a while but have only recently started building out. I’ve outlined some general guidelines and design goals for this effort as follows:

    Text in, text out: In the interest of keeping things as simple as possible, I want this interface to be purely text-based. Users will submit commands as plain text, and the application will provide responses in plain text. Obviously this text will be “wrapped” within the context of HTTP requests and responses, but I don’t want to have to think about HTML or CSS when taking input from the user or displaying responses back to the user.

    Task-oriented code only: After building the initial “harness” for this interface, the only code that should need to be written to create a new internal tool should be code that is expressly needed to accomplish the task that the tool is intended to support. If we want to encourage and enable ourselves to build good tooling, we need to lower the barriers to entry as much as possible.

    Built-in documentation: One of the great things about most command line utilities is the ‘help’ switch that provides usage guidelines and details about the arguments that the utility accepts. Our web-based command line utility should allow us to build the documentation for these tools directly into the code of the tools themselves.

    I finally started trying to implement this idea when I heard about a fantastic open-source library called CLAP (Command Line Auto Parser) that lets me meet the guidelines outlined above. CLAP lets you define classes with public methods that can be easily invoked from the command line.
    Here’s a quick example of the code that would be needed to create a new tool to do something within your system:

        public class CustomerTools
        {
            [Verb]
            public void UpdateName(int customerId, string firstName, string lastName)
            {
                // invoke internal services/domain objects/whatever to perform update
            }
        }

    This is just a regular class with a single public method (though you could have as many methods as you want). The method is decorated with the ‘Verb’ attribute that tells the CLAP library that it is a method that can be invoked from the command line. Here is how you would invoke that code:

        Parser.Run(args, new CustomerTools());

    Note that ‘args’ is just a string[] that would normally be passed in from the static Main method of a Console application. Also, CLAP allows you to pass in multiple classes that define [Verb] methods so you can opt to organize the code that CLAP will invoke in any way that you like. You can invoke this code from a command line application like this:

        SomeExe UpdateName -customerId:123 -firstName:Jesse -lastName:Taber

    ‘SomeExe’ in this example just represents the name of the .exe that would be created from our Console application. CLAP then interprets the arguments passed in order to find the method that should be invoked and automatically parses out the parameters that need to be passed in. After a quick spike, I’ve found that invoking the ‘Parser’ class can be done from within the context of a web application just as easily as it can from within the ‘Main’ method entry point of a Console application. There are, however, a few sticking points that I’m working around:

    Splitting arguments into the ‘args’ array like the command line: When you invoke a standard .NET console application you get the arguments that were passed in by the user split into a handy array (this is the ‘args’ parameter referenced above). Generally speaking they get split by whitespace, but it’s also clever enough to handle things like ignoring whitespace in a phrase that is surrounded by quotes. We’ll need to re-create this logic within our web application so that we can give the ‘args’ value to CLAP just like a console application would.

    Providing a response to the user: If you were writing a console application, you might just use Console.WriteLine to provide responses to the user as to the progress and eventual outcome of the command. We can’t use Console.WriteLine within a web application, so I’ll need to find another way to provide feedback to the user. Preferably this approach would allow me to use the same handler classes from both a Console application and a web application, so some kind of strategy pattern will likely emerge from this effort.

    Submitting files: Often an internal tool needs to support doing some kind of operation in bulk, and the easiest way to submit the data needed to support the bulk operation is in a file. Getting the file uploaded and available to the CLAP handler classes will take a little bit of effort.

    Mimicking the console experience: This isn’t really a requirement so much as a “nice to have”. To start out, the command-line interface in the web application will probably be a single ‘textarea’ control with a button to submit the contents to a handler that will pass it along to CLAP to be parsed and run. I think it would be interesting to use some javascript and CSS trickery to change that page into something with more of a “shell” interface look and feel.
I’ll be blogging more about this effort in the future and will include some code snippets (or maybe even a full blown example app) as I progress. I also think that I’ll probably end up either submitting some pull requests to the CLAP project or possibly forking/wrapping it into a more web-friendly package and open sourcing that.

    Read the article

  • PHP Browser Game Question - Pretty General Language Suitability and Approach Question

    - by JimBadger
    I'm developing a browser game using PHP, but I'm unsure whether the way I'm going about it is still to be encouraged. It's basically one of those MMOs where you level up various buildings and what have you, but then you commit some abstract fighting entity that the game gives you to an automated battle with another player (producing a textual, but hopefully amusing and varied, combat report). Basically, as soon as two players agree to fight, PHP functions on the "fight.php" page run queries against a huge MySQL database, looking up all sorts of complicated fight moves and outcomes. There are about three hundred thousand combinations of combat stance, attack, move and defensive stance, so obviously this is quite a resource-hungry process, and, on the super-cheapo hosted server I'm using for development, it rapidly runs out of memory. The PHP script for the fight logic currently has about a thousand lines of code in it, and I'd say it's about half-finished as I try to add a bit of AI into the fight script. Is there a better way to do something this massive than simply having some functions in a PHP file calling the MySQL database? I taught myself a modicum of PHP a while ago, and most of the stuff I read online (ages ago) about similar games was all PHP-based. But a) am I right to be using PHP at all, and b) am I missing some clever way of doing things that will somehow reduce server resource requirements? I'd consider non-PHP alternatives but, if PHP is suitable, I'd rather stick to that, so there's no overhead of learning something new. I think I'd bite that bullet if it's the best option for a better game, though.

    Read the article

  • /lib/udev/net.agent causing high CPU usage

    - by Antoine Benkemoun
    We have a number of Soekris boxes running Debian Squeeze. They were installed through an automated process consisting of using debootstrap and copying the result onto a Compact Flash card. We use Puppet to manage the configuration of all these boxes. Before Debian Squeeze, they were running Voyage Linux, which is just a "lighter" version of Debian. Since we have switched, we're seeing the /lib/udev/net.agent process take up an awful lot of CPU. We have so far been unable to find any clue as to what this really does and why it's taking up so much CPU time. In htop we see it sitting at the top of the list (screenshot not reproduced here). We are seeing absolutely no syslog messages related to this process, so we're a bit lost... So, I am looking for pointers as to what this process does in general and what could be the potential cause of such CPU usage.
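
    Some hedged diagnosis steps, on the assumption that net.agent is being re-run constantly by a stream of network uevents (for example a flapping interface) rather than one agent spinning:

        # Watch the uevents udev is reacting to, with their environment
        udevadm monitor --environment
        # How many net.agent processes exist, and how long have they lived?
        ps -o pid,etime,args -C net.agent
        # Attach to one busy instance and see what it is doing (PID is a placeholder)
        strace -f -p <PID> -e trace=process,network
        # Anything from udev/ifupdown in the logs?
        grep -i 'net.agent\|udevd' /var/log/daemon.log | tail

    On Squeeze, /lib/udev/net.agent is the hotplug glue that hands interface add/remove events to ifupdown, so comparing /etc/network/interfaces on the affected boxes against the old Voyage images is also worth a look.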

    Read the article

  • How can I get data off of a Corsair SSD?

    - by user1870398
    My Corsair SSD won't work and I have some critical data on it that I didn't back up (I needed to create a copy of my mechanical storage device, just forgot). The drive isn't detected by the OS or BIOS. I also tried it on another system, but all that happened was the OS failed to load (my guess was that it knew the drive was there, just couldn't read it). I tried powering it on without the data cable for a bit of time to see if it'd work again, but it didn't. Any ideas of how I can get the data off of this drive without having to send it in?

    Read the article

  • Windows 7 Port Forwarding Issue

    - by Elliot
    I can't get port forwarding to work now that I am using Windows 7 (64-bit). I am using a wireless connection (no wired connection available). I have the ports forwarded (the IP has been double-checked, router settings are confirmed), there is an exception for all of the programs in question in Windows Firewall, and Resource Monitor lists the ports as available, not restricted. And yet when I either use a specific program (e.g. uTorrent, DC++, Command & Conquer 3) or check using Firefox, the port reads as closed. How do I get port forwarding to work?

    Read the article

  • Installing the ATI Mobility Radeon HD 5650 on Ubuntu 11.10

    - by Antonio
    Hi everyone. I have an HP Pavilion dv6-3110 laptop with a dedicated 1 GB ATI Mobility Radeon HD 5650 video card, and I recently installed Ubuntu 11.10, 64-bit version. I have followed many online guides to install the drivers for my video card, but none of them worked. In the "Additional Drivers" window I managed to install the "ATI/AMD proprietary fglrx graphics driver", but after rebooting I cannot use the video card correctly. The "ATI/AMD proprietary fglrx graphics driver (post-release updates)", on the other hand, will not install at all, reporting the following error: "The installation of this driver failed. Please consult the log files for more information: /var/log/jockey.log". I then tried downloading the latest released drivers directly from the AMD site via the package amd-driver-installer-12-3-x86.x86_64.run. I launched it, followed the installation wizard, the installation completed, and I typed sudo aticonfig --initial for the initial configuration, but on reboot the PC only shows text on a black screen with a series of "OK" and a few "FAIL" lines. I have tried this procedure with earlier driver versions too, but the result is always the same. I am desperate; will I ever be able to use my video card? For completeness, here is what I get when running lspci -nn | grep VGA to list the graphics processors in my PC:

        00:02.0 VGA compatible controller [0300]: Intel Corporation Core Processor Integrated Graphics Controller [8086:0046] (rev 02)
        01:00.0 VGA compatible controller [0300]: ATI Technologies Inc Madison [AMD Radeon HD 5000M Series] [1002:68c1]

    Thanks in advance to anyone who can help me. Best regards, Antonio Giordano
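
    A hedged note and a commonly recommended cleanup, not a guaranteed fix: the lspci output shows both an Intel IGP and the ATI chip, i.e. switchable graphics, which the fglrx releases of that era handled poorly; and mixing the repository fglrx with the amd-driver-installer run tends to leave conflicting files behind. The usual advice is to purge everything, restore the stock stack, and then install only one driver variant:

        # Remove the AMD-installer copy, if its uninstaller is present
        sudo sh /usr/share/ati/fglrx-uninstall.sh
        # Purge the packaged variants
        sudo apt-get remove --purge 'fglrx*'
        # Restore the stock Mesa/X stack and drop any leftover xorg.conf
        sudo apt-get install --reinstall libgl1-mesa-glx xserver-xorg-core
        sudo rm -f /etc/X11/xorg.conf
        # Reboot, then try a single driver source (e.g. "Additional Drivers") again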

    Read the article

  • How do I restore a non-system hard drive using Time Machine under OSX?

    - by richardtallent
    I dropped one of the external drives on my Mac Pro and it started making noises... so I bought a replacement drive. No biggie, that's why I have Time Machine, right? So now that I have the new drive up and initialized, how do I actually restore the drive from backup? Time Machine is intuitive when it comes to restoring the system drive or restoring individual folders/files on the same literal device, but I'm a bit stuck on how to properly restore an entire drive that is not the boot drive. I saw one suggestion to use the same volume name as the old drive and then go into Time Machine. Haven't tried that, since the information is unconfirmed. For now, I just went to the Time Machine volume, found the latest backup folder for that volume, and I'm copying the files via Finder. Of course, I expect this to work just fine, but I feel like I'm missing something if that's the "proper" way to do this.
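
    Copying out of Backups.backupdb with Finder generally works, but a hedged alternative that also preserves ownership, ACLs and extended attributes is rsync from the latest snapshot (volume names here are placeholders):

        sudo rsync -aE --progress \
          "/Volumes/TM-Disk/Backups.backupdb/MyMacPro/Latest/OldVolume/" \
          "/Volumes/NewVolume/"

    Apple's bundled rsync accepts -E for extended attributes, and the trailing slashes matter so the contents, not the folder itself, get copied. Giving the new drive the old volume name should then let Time Machine treat it as a continuation of the old one, though that part is unconfirmed.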

    Read the article

  • Creating the concept of Time

    - by Jamie Dixon
    So I've reached the point in my exploration of gaming where I'd like to implement the concept of time into the little demo I've been building. What are some common methodologies for creating the concept of time passing within a game? My thoughts so far: my game loop tends to spend a fair bit of time sitting around waiting for user input, so any time system will likely need to be run in a separate thread. What I've currently done is create a BackgroundWorker, passing in a method that contains a loop triggering every second. This is working fine and I can output information to the console from here, etc. Inside this loop I have a DateTime object that is incremented by 1 minute for every real-time second (the game begins in the year 01/01/01). Is this a standard way of achieving this result, or are there more tried and tested methods? I'm also curious about how to go about performing time-based actions (reducing player energy, moving entities around the game board, life/death etc). Thanks for any pointers or advice. I've searched around, however I'm not familiar enough with the terms, and so my searches are yielding little result on this one.

    Read the article

  • Windows Vista Password changed... What the?

    - by Moshe
    My mom's Vista Home Premium (32-bit) password was changed. My mom said that she didn't change it, and she doesn't think anyone else here did either. So... could this have been done remotely? I'm running Ophcrack now; what else can we do? (I haven't tried safe mode yet.) I'm a techie and somewhat baffled. Help! Edit: Ophcrack found an empty LM hash, but no NT hash is displayed. Entering safe mode... Edit 2: I'm an idiot. Well, sort of. Ophcrack could not crack the password, which was just lowercase English letters, but for some reason I was able to log in using the original password in safe mode. Once in safe mode I "changed the password" back to its original value and was then able to log in in regular mode... It's time to run a virus scan.

    Read the article

  • Looking for zsh completion file for OS X native commands

    - by Chiggsy
    I've been digging deep into what actually comes with OS X in /usr/bin and especially /usr/libexec. Quite good stuff really, although the command syntax is a bit... odd. Let me direct the curious to the command that made me think of this: networksetup -printcommands. I cannot think of a command that better illustrates the need for good completion. security -h perhaps, but those commands have a familiar easy-to-read format. I beseech the community, please point me to a place where I can find such a thing. I never type them right, and I ache for tab completion for this. Anyone have any idea where I could grab something? I'd prefer to stand on the shoulders of giants instead of trying to make a zsh/bash completion script leap into the world, ready for battle, like Athena, from my forehead. I am no Zeus when it comes to compctl. Not at all.
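
    In the absence of a ready-made file, a minimal, untested sketch of a completion function that lets networksetup describe itself (the parsing of -printcommands is approximate; save it as _networksetup somewhere on $fpath and re-run compinit):

        #compdef networksetup
        # Harvest the verbs networksetup itself reports rather than hand-maintaining them
        local -a verbs
        verbs=( ${(f)"$(networksetup -printcommands 2>/dev/null | awk '{print $1}')"} )
        _describe 'networksetup command' verbs

    It only completes the first argument, but it beats typing -setairportpower from memory.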

    Read the article

  • Very poor battery life on Lenovo ThinkPad W500 laptop

    - by Matt
    I have a new ThinkPad W500 laptop (with the 9-cell battery) running Windows 7 RTM 64-bit. All drivers and the BIOS are the latest. Battery life appeared poor, so I performed several tests under the following conditions:
    - Battery starts with 100% charge
    - Screen on minimum brightness
    - Screen saver running
    - Wifi (802.11n) enabled and active
    - "Normal" set of programs running, including Outlook 2007, FeedDemon, TweetDeck and antivirus
    - Laptop left untouched during tests
    Under the above conditions, I clocked under 2 hours of battery life across 3 tests (1:49, 1:52, 1:47). If I actually use the computer, we're looking at 1:30. Something is not right... The smoking gun here is that Lenovo hasn't officially released Windows 7 drivers for this laptop. I haven't tried with Vista or XP yet. What are others seeing? Update: For W500 owners with the 9-cell battery, what value do you see for "Full charge capacity" on the Battery tab of the Power Manager utility? I see 81.87 Wh.

    Read the article

  • Geek it Up

    - by BuckWoody
    I’ve run into a couple of kinds of folks in IT. Some really like technology a lot – a whole lot – and others treat it more as a job. For those of you in the second camp, you can go back to your drab, meaningless jobs – this post is for the first group. I’m a geek. Not a little bit of a geek, a really big one. I love technology, I get excited about science and electronics in general, and I read math books when I don’t have to. Yes, I have a Star Trek item or two around the house. My daughter is fluent in both Monty Python AND Serenity. I totally admit it. So if you’re like me (OK, maybe a little less geeky than that), then go for it. Put those toys in your cubicle, wear your fan shirt, but most of all, geek up your tools. No, this isn’t an April Fool’s post – I really mean it. I’ve noticed that when I get the larger monitor, better mouse, cooler keyboard, I LIKE coming to work. It’s a way to reward yourself – I’ve even found that it makes work easier if I have the kind of things I enjoy around to work with. So buy that old “clicky” IBM keyboard, get three monitors, and buy a nice headset so that you can set all of your sounds to Monty Python WAV’s. And get to work.

    Read the article

  • git init --bare permission denied on 16gb USB stick

    - by Sour Lemon
    I am using Git on a Windows 7 machine (64-bit) and have been learning how to use Git to version-control my files. Now I want to be able to create a --bare repository on an external device (in this case a 16 GB USB stick), but unfortunately when I try to create a --bare repository on it I get the following error:

        f:/: Permission denied

    I am using the Git Bash program which is installed with Git on Windows machines, so these are the commands I am typing in (I am also opening the program as administrator by holding Ctrl + Shift when I open it):

        cd /f
        git init --bare
        f:/: Permission denied

    However, if I create a normal repository it works just fine:

        cd /f
        git init
        Initialized empty repository in f:/.git/

    Can anybody shed some light on why I can't create a --bare repository? Any help would be much appreciated.
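
    One hedged guess rather than a diagnosis: a bare init wants to take ownership of the directory it lands in, and the root of a FAT-formatted drive is a directory Git cannot manipulate that way, so try pointing the bare repository at a subdirectory instead of the drive root (project.git is an illustrative name):

        cd /f
        mkdir project.git
        cd project.git
        git init --bare

    If that still fails, reformatting the stick as NTFS (or ext, if it only ever needs to talk to Git) is the other commonly suggested route.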

    Read the article

  • Making of Amar Chitra Katha Comics – Behind The Scenes

    - by Gopinath
    A couple of days ago there was a book fair at our workplace, and in a crowded stall I found some very interesting comic books for children. The comics are mostly based on Indian mythology, history and the legends who inspired generations. After flicking through a couple of pages I fell in love with them and decided that the next gift I buy for kids is going to be one of these comic books. After speaking to the salesmen, I came to know that these comic books are very popular among kids and elders under the name Amar Chitra Katha. I found myself ashamed of being ignorant about these popular books and started doing a bit of research. The Amar Chitra Katha comics were started in 1967 by Anant Pai, who was recently honoured by Google India with a doodle. The comic books are published in 20 Indian languages, 440+ titles have been released so far, and over a million copies have been sold since inception. Here is an interesting video I found on the labnol blog that takes us behind the scenes of the making of Amar Chitra Katha comics. You can browse through the collection of comics on their website amarchitrakatha.com and place orders for free shipping around the globe. For those who care to teach children about the history and great leaders of India, I strongly recommend the gift of Amar Chitra Katha.

    Read the article

  • Email setup on dedicated servers

    - by zaf
    I'm thinking seriously of renting a dedicated server. Now, I know how to set up Apache and the underlying scripting engines and databases, but I'm a bit clueless about how the email would work. Currently, I'm on a shared hosting account and I get a fancy GUI which allows me to nicely add a domain, set up nameservers and then the email for all domain names, with either simple forwarding or a full account which also has a webmail app behind it. What options do I have? Are there uncomplicated ways to get the same email setup experience? Or are there reliable external providers I could use? My past experiences with Sendmail/Postfix have always been fuzzy: not exactly knowing what's happening behind the scenes.
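
    If you do end up self-hosting, the simple-forwarding case is small enough to sketch. A minimal Postfix virtual-alias setup (domain and addresses are placeholders, and it assumes DNS MX records already point at the box):

        # /etc/postfix/main.cf (fragment)
        virtual_alias_domains = example.com
        virtual_alias_maps = hash:/etc/postfix/virtual

        # /etc/postfix/virtual
        info@example.com      you@yourrealmail.com
        sales@example.com     you@yourrealmail.com

        # activate the map and reload
        sudo postmap /etc/postfix/virtual
        sudo service postfix reload

    Full mailboxes with webmail (Dovecot plus Roundcube, say) are a much bigger undertaking, which is why many people keep the web stack on the dedicated server and hand mail to an external provider.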

    Read the article

  • Dutch ACEs SOA Partner Community Award Celebration

    - by JuergenKress
    When you win you need to celebrate. This was the line of thinking when I found out that I was part of a group that won the Oracle SOA Community Country Award. Well, thinking about a party is one thing; preparing it and finally having the small party is something completely different. It starts with finding a date that would be suitable for the majority of invited people. As you can imagine, the SOA ACEs and ACE Directors have a busy life that takes them places. Alongside that they are engaged with customers who want to squeeze every bit of knowledge out of them. So everybody is pretty busy (that’s what makes you an ACE). After some deliberation (and checks of international Oracle events, Trip-it, blogs and tweets) a date was chosen. Meeting on a Friday evening for some drinks is probably not a Dutch-only activity. But as some of the ACEs are self-employed, they miss the companies around them to organize such events. Come the day, a turn-out of almost 50% was great, although I expected some more folks. This was mainly due to some illness and work overload. Luckily the mini-party got going, (alcoholic) beverages were consumed, food was appreciated, a decent picture was made, and all had a good chat and hopefully a good time. (Pictured, from left to right: Eric Elzinga, Andreas Chatziantoniou, Mike van Aalst, Edwin Biemond.) All in all a nice evening and certainly a "meeting" which can be repeated. For the full article please visit Andreas's blog.

    Read the article

  • What are some hosted MySQL solutions?

    - by bigmac
    I would like to host a MySQL database somewhere. The basic solution would be to get a VPS, but then I am stuck doing all the sysadmin stuff. It's not terribly difficult if you are not doing anything fancy, but if something goes wrong (like a security breach), it may be hard to resolve or even diagnose in the first place. On the other end of the spectrum, I found Xeround and Amazon RDS. Xeround is not exactly standard MySQL. Amazon seems a bit complex to get started (which is weird because they specifically say it is "simple to deploy"). Why is there nothing in the middle? Is there no demand for a basic hosted MySQL solution?

    Read the article

  • Wireless networking on Gnome on Ubuntu 9 / 10

    - by WaveyDavey
    So here's my problem: I have some netbooks (ASUS Eee and Acer Aspire Ones) that I've been tasked to set up as kiosk machines, locked up tight for normal users. I am a command-line, server man, so this GNOME malarkey is all a bit new to me. I found a lovely 9.04 kiosk live CD that installs and runs exactly as I want it to, but I can't get the wireless working. So I dropped on a full 10.04 distro, and wireless works straight out of the box (so the hardware is good): all I needed to do was right-click on the network connection icon, enter my SSID and password (WPA/WPA2), and away it went, perfect. Further investigation on the 10.04 distro shows that /etc/network/interfaces is virtually empty (just "auto lo" and "iface lo inet loopback" in it), even after I have set up the wireless through the GNOME taskbar applet (is that the right word?). So where does GNOME/Ubuntu store the network settings to bring the blasted wireless connection up, and what do I need to do on the kiosk version to get wireless running?
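
    For background (hedged to the NetworkManager versions shipped with 9.04/10.04): connections created through the panel applet are stored per-user by NetworkManager, in GConf with the WPA secret in the GNOME keyring, which is why /etc/network/interfaces stays almost empty. For a locked-down kiosk, the simpler route is to bypass NetworkManager and put the WPA connection in the interfaces file; a sketch, assuming wpasupplicant is installed and the adapter is wlan0:

        # /etc/network/interfaces
        auto lo
        iface lo inet loopback

        auto wlan0
        iface wlan0 inet dhcp
            wpa-ssid YourNetworkName
            wpa-psk  YourPassphrase

    Bring it up with sudo ifup wlan0, and remove or disable the network-manager package so the two mechanisms don't fight over the interface.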

    Read the article

  • Correct way to drive Main Loop in Cocoa

    - by Kyle
    I'm writing a game that currently runs on both Windows and Mac OS X. My main game loop looks like this:

        while (running)
        {
            ProcessOSMessages();   // Peek/Translate message in Win32,
                                   // nextEventMatchingMask in Cocoa
            GameUpdate();
            GameRender();
        }

    That's obviously simplified a bit, but that's the gist of it. In Windows, where I have full control over the application, it works great. Unfortunately Apple has their own way of doing things in Cocoa apps. When I first tried to implement my main loop in Cocoa, I couldn't figure out where to put it, so I created my own NSApplication per this post. I threw my GameFrame() right in my run function and everything worked correctly. However, I don't feel like it's the "right" way to do it. I would like to play nicely within Apple's ecosystem rather than trying to hack a solution that works. This article from Apple describes the old way to do it, with an NSTimer, and the "new" way to do it using CVDisplayLink. I've hooked up the CVDisplayLink version, but it just feels... odd. I don't like the idea of my game being driven by the display rather than the other way around. Are my only two options to use a CVDisplayLink or to override NSApplication? Neither one of those solutions feels quite right.

    Read the article

  • What is a plain text password and why can it be decrypted?

    - by Misha
    I was trying to understand the level of security offered by Windows picture passwords and ran across this claim on this website: "Some of our password recovery utilities already implement Windows 8 plain-text password decryption. The upcoming release of Windows Password Recovery is expected to have a full-fledged Vault analyzer and offline decoder." I'm trying to understand what a plain-text password is and whether it is the default kind of password when I add a password to my account. My head is a bit muddled on this one, so any clarification would help. It seems there are passwords that can be decrypted and those that can't. Which can be decrypted? Is the password I enter in Windows exposed?
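
    An illustration of the distinction the vendor is trading on (the commands below are purely for demonstration, not how Windows stores anything): a one-way hash cannot be decrypted, only guessed and re-hashed, while anything stored encrypted, or in the clear, can be read back by whoever holds the key.

        # One-way hash: no "decrypt" operation exists
        echo -n 'hunter2' | sha256sum
        # Reversible encryption: the secret comes back out with the key
        echo -n 'hunter2' | openssl enc -aes-256-cbc -pass pass:demo-key -base64 > pw.enc
        openssl enc -d -aes-256-cbc -pass pass:demo-key -base64 < pw.enc   # prints hunter2

    Regular Windows account passwords are stored as hashes. The picture-password and Vault material reportedly has to be recoverable by the OS itself, so it is stored encrypted rather than hashed, and that is what "plain-text password decryption" tools go after.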

    Read the article

  • Is “Application Programming Interface” a bad name?

    - by Taylor Hawkes
    Application programming interface seems like a bad name for what it is. Is there a reason it was named that? I understand that people used to call them Advanced Programming Interfaces and then renamed them to Application Programming Interface. Is that why it is poorly named? Why is it not named Application (to) Programmer Interface? I guess I'm just confused about the meaning behind the name. I write more about my confusion around the name here: BREAKING DOWN THE TERM "APPLICATION PROGRAMMING INTERFACE". This is a very confusing term. We mostly understand what the word "interface" means, but "application programming", what even is that? Honestly, I'm confused. Is it supposed to be two words, "application" and "programming", with the "interface" supposed to sit between the two? Would a "Computer Human Interface" be an interface between a "Computer" and a "Human" (monitor, keyboard, mouse), or is a "Computer Human" a real thing, perhaps the Terminator? So a CHI is our boy Kyle Reese, who is the only way we are able to work with the computer human. I think it more likely that "Application Programming Interface" was simply poorly named and doesn't really make sense. It was originally called an "Advanced Programming Interface", but, perhaps being a bit too ostentatious, it merged into the now widely accepted "Application Programming Interface". So now, not wanting to change an acronym has confused the living heck out of everyone... Any thoughts or clarification would be great. I'm giving a lecture on this topic in a month, so I would prefer not to BS my way through it.

    Read the article

  • Tools of the Trade

    - by Ajarn Mark Caldwell
    I got pretty excited a couple of days ago when my new laptop arrived. “The new phone books are here!  The new phone books are here!  I’m a somebody!” - Steve Martin in The Jerk. It is a Dell Precision M4500 with an Intel Core i7 2.8 GHz running 64-bit Windows 7 with a 15.6” widescreen, 8 GB RAM, 256 GB SSD. For some of you high fliers, this may be nothing to write home about, but compared to the 32-bit Windows XP laptop with 2 GB of RAM and a regular hard disk that I’m coming from, it’s a really nice step forward. I won’t even bore you with the details of the desktop PC I was first given when I started here 5 1/2 years ago. Let’s just say that things have improved. One really nice thing is that while we are definitely running a lean and mean department in terms of staffing, my boss believes in supporting that lean staff with good tools in order to stay lean instead of having to spend even more money on additional employees. Of course, that only goes so far, and at some point you have to add more people in order to get more work done, which is why we are bringing on-board a new employee and a new contract developer next week. But that’s a different story for a different time. The main topic for this post is to highlight the variety of tools that I use in my job and that you might find useful, too. This is easy to do right now because the process of building up my new laptop from scratch has forced me to assemble a list of software that had to be installed and configured. Keep in mind as you look through this list that I play many roles in our company. My official title is Software Engineering Manager, but in addition to managing the team, I am also an active ASP.NET and SQL developer, the Database Administrator, and 50% of the SAN Administrator team. So, without further ado, here are the tools and some comments about why I use them:
    Virtual Clone Drive: Easily mount an ISO image as a DVD drive. This is particularly handy when you are downloading disk images from Microsoft for your tools.
    SQL Server 2008 R2 Developer Edition: We are migrating all of our active systems to SQL 2008 R2. Developer Edition has all the features of Enterprise Edition, but is intended for development use.
    SQL Server 2005 Developer Edition (BIDS only): The migration to SSRS 2008 R2 is just getting started, and in the meantime, maintenance work still has to be done on the reports on our SQL 2005 server. For some reason, you can’t use BIDS from 2008 to write reports for a 2005 server. There is some different format, and when you open 2005 reports in 2008 BIDS, it forces you to upgrade, and they can no longer be uploaded to a 2005 server. Hopefully Microsoft will fix this soon, in some manner similar to the way Visual Studio now allows you to pick which version of the .NET Framework you are coding against.
    Visual Studio 2010 Premium: All of our application development is in ASP.NET, and we might as well use the tool designed for it. I’ve used a version of Visual Studio going all the way back to VB 6.0 and Visual Interdev.
    Vault Professional Client: Several years ago we replaced Visual Source Safe with SourceGear Vault (then Fortress, and now Vault Pro), and I love it. It is very reliable with low overhead - perfect for a small to medium size development team. And being a small ISV, their support is exceptional.
    Red-Gate Developer Bundle with the SQL Source Control update for Vault: I first used, and fell in love with, SQL Prompt shortly before Red-Gate bought it, and then Red-Gate’s first release made me love it even more. SQL Refactor (which has since been rolled into the latest version of SQL Prompt) has saved me many hours and migraines trying to understand somebody else’s code when their indenting was nonexistent, or worse, irrational. SQL Compare has been awesome for troubleshooting potential schema issues between different instances of system databases. SQL Data Compare helped us identify the cause behind a bug which appeared in PROD but could not be reproduced in a nearly (but not quite exactly) identical copy in UAT. And the newest tool we are embracing: SQL Source Control. I blogged about it here (and here, and here) last December. This is really going to help us keep each developer’s copy of the database in sync with one another.
    Fiddler: Helps you watch the whole traffic stream on web visits. Haven’t used it a lot, but it did help me track down some odd 404 errors we were finding in our own application logs. Has some other JavaScript troubleshooting capabilities, but some of its usefulness has been supplanted by the Developer Tools option in IE8.
    Funduc Search & Replace: Find any string anywhere in a mound of source code really, really fast. Does RegEx searches, if you understand that foreign language. Has really helped with some refactoring work to pinpoint, for example, everywhere a particular stored procedure is referenced, whether in .NET code or other SQL procedures (which we have in script files). Provides in-context preview of the search results. Fantastic tool, and a bargain price.
    SciTE: SciTE is a Scintilla-based text editor and a fantastic, light-weight tool for quickly reviewing (or writing) program code, SQL scripts, and extract files. It has language-specific syntax highlighting. I used it to write several batch and CMD programs a year ago, and to examine data extract files for exchanging information with other systems. Extremely handy are the options to View End of Line and View Whitespace. Ever receive a file that is supposed to use CRLF as an end-of-line marker, but really only has CRs? SciTE will quickly make that visible.
    Infragistics Controls: We do a lot of ASP.NET development, and frequently use the WebGrid, WebTab, and date picker controls. We will likely be implementing the Hierarchical Data Grid soon. Infragistics has control suites for WebForms, WinForms, Silverlight, and, coming soon, MVC/jQuery.
    WinZip (with the Command-Line add-in): The classic compression program with a great command-line interface that allows me to build those CMD (and soon PowerShell) programs for automated compression jobs. Our versioned Build packages are zip files.
    XML Notepad: Haven’t used this a lot myself, but one of my team really likes it for examining large XML files.
    LINQPad: Again, haven’t used this one a lot, but it was recommended to me for learning and practicing my LINQ skills, which will come in handy as we implement Entity Framework.
    SQL Sentry Plan Explorer: SQL Server Show Plan on steroids. Great for helping you focus on the parts of a large query that are of most importance. Also great for just compressing the graphical plan into a more readable layout.
    Araxis Merge: A great DIFF and Merge tool. SourceGear provides a great tool called DiffMerge that we use all the time, but occasionally I like the cross-edit capabilities of Araxis Merge. For a while, we also produced DIFF reports in HTML that showed all the changes that occurred between two releases. This was most important when we were putting out very small, but very important, hot fixes on a very politically hot system. The reports produced by Araxis Merge gave the Director of IS assurance that we were not accidentally introducing ripples throughout the system with our releases.
    Idera SQL Admin Toolset: A great collection of tools, including a password checker to help analyze your SQL Server for weak user passwords and a Backup Status tool to quickly scan a large list of servers and databases to identify any that are overdue for backups. Particularly helpful for highlighting new databases that have been deployed without getting included in your backup processing. I also like Space Analyzer to keep an eye on disk space consumed by database files.
    Idera SQL Job Manager: This free tool provides a nice calendar view of SQL Server job schedules, but to a degree, you also get what you pay for. We will be purchasing SQL Sentry Event Manager later this year as an even better job schedule reviewer/manager. But in the meantime, this at least gives me a good view on potential resource conflicts across multiple instances of SQL Server.
    DBFViewer 2000: I inherited a couple of FoxPro databases that I have to keep an eye on occasionally and have not yet been able to migrate them to SQL Server.
    Balsamiq Mockups: We are still in evaluation mode on this tool, but I really like it as a quick UI mockup tool that does not require Visual Studio, so someone other than a programmer can do UI design. The interface looks hand-drawn, which definitely has some psychological benefits when communicating to users, too.
    FeedDemon: I have to stay on top of my WAY TOO MANY blog subscriptions somehow. I may read blogs on a couple of different computers, and FeedDemon’s integration with Google Reader allows me to keep them all in sync. I don’t particularly like the Google Reader interface, or the fact that it always wanted to mark articles as read just because I scrolled past them. FeedDemon solves this problem for me, and provides a multi-tabbed interface, which is good because fairly frequently one blog will link to something else I want to read, and I can end up with a half-dozen open tabs all from one article.
    Synergy+: In my office, I run four monitors across two computers, all with one mouse and keyboard. Synergy is the magic software that makes this work.
    TweetDeck: I’m not the most active Tweeter in the world, but when I want to check in with the Twitterverse, this really helps. I have found the #sqlhelp and #PoshHelp hash tags particularly useful, and I also have columns set up to make it easy to monitor #sqlpass, #PASSProfDev, and short term events like #sqlsat68.
    Whew! That’s a lot. No wonder it took me a couple of days to get everything set up the way I wanted it. Oh, that and actually getting some work accomplished at the same time. Anyway, I know that is a huge dump of info, and most people never make it here to the end, so for those who did, let me say, CONGRATULATIONS, you made it! I hope you’ll find a new tool or two to make your work life a little easier.

    Read the article

  • After upgrading to 12.04 the scanner from Brother Printer MFC-290C does not work

    - by Lorenzo
    I upgraded Ubuntu to 12.04 from 11.10. The printer works, but the scanner doesn't now. In 11.10 I had to install a special driver from Brother. The printer's model is Brother MFC-290C. The computer is a Toshiba Satellite. How can I get the scanner working? Update: I have a 64-bit installation on the Toshiba Satellite. Thank you for your instructions, Chad--24216. I followed each step, 1 through 5. I also updated the Brother Linux scanner S-KEY tool. The output of dpkg -l | grep Brother is:

        ii  brscan-skey                0.2.3-0         Brother Linux scanner S-KEY tool
        ii  brscan3                    0.2.11-5        Brother Scanner Driver
        ii  mfc290ccupswrapper:i386    1.1.2-2         Brother CUPS Inkjet Printer Definitions
        ii  mfc290clpr:i386            1.1.2-2         Brother lpr Inkjet Printer Definitions
        ii  printer-driver-ptouch      1.3-3ubuntu0.1  printer driver Brother P-touch label printers

    Still the scanner does not work. Here is the message from Xsane: "Failed to open device brother3:bus6;dev1: Invalid argument." Here is the message from Simple Scan: "Failed to scan. Unable to connect to scanner." And Scan Utility still doesn't display the scanner.
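
    A commonly suggested fix for Brother's brscan3 driver on 64-bit installs (hedged: the paths follow Brother's own 64-bit instructions and may differ on your machine): SANE looks for backends in /usr/lib/sane, while the Brother packages install theirs under /usr/lib64/sane, and the USB device node may also lack permissions for ordinary users.

        # See where the Brother backend actually landed
        ls /usr/lib64/sane/ 2>/dev/null | grep brother
        # Symlink it to where SANE expects backends (adjust names to what you found)
        sudo ln -sf /usr/lib64/sane/libsane-brother3.so.1 /usr/lib/sane/
        # Give users access to the USB scanner (04f9 is Brother's USB vendor ID)
        echo 'ATTRS{idVendor}=="04f9", MODE="0666"' | \
          sudo tee /etc/udev/rules.d/60-brother-scanner.rules
        sudo service udev restart   # then unplug and replug the USB cable
        # Re-test as a normal user
        scanimage -L

    scanimage -L should then list the brother3 device; if it does, Simple Scan and Xsane usually follow.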

    Read the article
