Search Results

Search found 37604 results on 1505 pages for 'build script'.


  • Solar Case Mod Powers Raspberry Pi FTP Server with Sunshine

    - by Jason Fitzpatrick
    This project combines a solar panel, Raspberry Pi, and a bit of code for the Pi to turn the whole array into a solar powered server (you could easily modify the project to become a solar powered music player or other device). The case mod comes to us courtesy of tinker CottonPickers–he shares the build and offers the cases for sale here. Building off the solar case, David Hayward at CNET UK added on an FTP server so that the Pi can serve as a tiny, take-anywhere, power-outlet optional, file sharing hub. Hit up the link below for the FTP configuration instructions. How to Make a Raspberry Pi Solar-Powered FTP Server [CNET UK]

    Read the article

  • New Tuxedo White Papers

    - by todd.little
    As part of the Tuxedo 11gR1 release, I've written two new white papers on Tuxedo. One is called "Tuxedo in a SOA World" and discusses how Tuxedo fits into SOA based applications. It covers most of the various connectivity options from Tuxedo into SOA environments and gives guidance as to which connectivity options are best suited for a particular application requirement. The other white paper "SCA: Bringing Modern SOA Programing to Tuxedo" is of a more technical bent and focuses on using the SCA features in SALT to easily build SOA based applications on Tuxedo without using a lot of technical APIs. In fact, services built using SALT's SCA support don't require any technical APIs, just pure business logic, and SCA clients need at most a couple of API calls, simply to look up a service. You can find these two new white papers as well as some additional white papers at http://www.oracle.com/technology/products/tuxedo/index.html.

    Read the article

  • SEO question, getting info about websites

    - by michael
    I don't know much about SEO. I have a CSV file with 200,000 links to websites, and I want to add another field (or maybe a couple of them) to each link in the CSV file with the page ranking of each link, and maybe other interesting metrics and info about the link. I saw a free API from http://apiwiki.seomoz.org/ that I could maybe use to build a simple script, but it's limited to 3 links per second, which will take roughly 1100 minutes, or about 18 hours, to run. Any other ideas on how to get this kind of simple metric about each link? Thanks!
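
    The enrichment loop the question describes is straightforward to sketch. The snippet below is only a minimal, illustrative version: the metrics endpoint URL, the one-URL-per-line CSV layout, and the roughly 3-requests-per-second budget are assumptions rather than the real SEOmoz API, so substitute whichever provider and response format you actually sign up for.

        // A minimal sketch of a rate-limited enrichment pass over the CSV.
        // The endpoint, auth, and CSV layout are placeholders (assumptions).
        using System;
        using System.IO;
        using System.Net.Http;
        using System.Threading.Tasks;

        class LinkMetricsEnricher
        {
            static async Task Main()
            {
                using var http   = new HttpClient();
                using var input  = new StreamReader("links.csv");               // one URL per line (assumed)
                using var output = new StreamWriter("links-with-metrics.csv");

                string url;
                while ((url = await input.ReadLineAsync()) != null)
                {
                    // Hypothetical metrics endpoint; replace with the provider's real API call.
                    string metrics = await http.GetStringAsync(
                        "https://api.example.com/metrics?url=" + Uri.EscapeDataString(url));

                    await output.WriteLineAsync(url + "," + metrics.Replace(',', ';'));

                    // Stay under a roughly 3-requests-per-second limit.
                    await Task.Delay(350);
                }
            }
        }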

    Read the article

  • Why must I uninstall libavcodec53 and libavutil51 to install ubuntu restricted extras?

    - by honestann
    When I try to install "ubuntu restricted extras" in "ubuntu software center", it displays a warning dialog that says the following items must be removed: libavcodec53 libavutil51 Why? And if I choose to install "ubuntu restricted extras", what will I lose? PS: I think I noticed libavcodec53 flash past as my daily build of codeblocks package was installing... so that's one possibility. Will I break my software development environment if I install "ubuntu restricted extras"? Or do these packages need to be removed because they are included in "ubuntu restricted extras"? If so, why doesn't the dialog mention that (and remove the worry and confusion)? PS: The output generated by "apt-get -s install ubuntu-restricted-extras" is HERE.

    Read the article

  • 2010 Collaboration Summit Impressions

    - by Elena Zannoni
    It's a bit late, but there you have it anyway. April 14 to 16 I attended the Linux Foundation Collaboration Summit in SFO. I was running two tracks, one on tracing and one on tools. You can see the tracks and the slides here: http://events.linuxfoundation.org/events/collaboration-summit/slides

    I was pretty busy both days, Thursday with a whole-day tracing track, Friday with a half-day toolchain track. The sessions were well attended, the rooms were full, with people spilling into the hallways. Some new things were presented, like Kernelshark, by Steve Rostedt, a GUI (yes, believe it or not, a GUI) written in GTK. It is very nice, showing a timeline for traced kernel events, and you can zoom in and filter at will. It works on the latest kernels, and it requires some new things/fixes in GTK; I don't recall exactly what version of GTK though. Dominique Toupin from Ericsson presented something about user requirements for tracing, mostly though about who's who in the embedded world, and Eclipse. Masami and Mathieu presented an update on their work; see their slides.

    The interesting thing to me was of course the new version of uprobes without the underlying utrace, presented by Jim Keniston. At the end of the session we had a discussion about the future of utrace. Roland wasn't there, but Tom Tromey (also from RedHat) collected the feedback. Basically we are at a standstill now that utrace has been rejected yet again. There wasn't much advice that anybody could give, except, jokingly, we decided that the only way in is to make it a part of perf events. There needs to be another refactoring, but most of all, this "killer app" that would be enabled because of utrace hasn't materialized yet. We think that having a good debugging story on Linux is enough of a killer app, for instance allowing multiple tracers, and not relying on SIGCHLD etc. I think this wasn't completely clear to the kernel community. Trying to achieve debugging via a gdb stub inside the kernel, interfacing to utrace and controlled via the gdb remote protocol, also lost its appeal (thankfully, since the gdb remote protocol is archaic). Somebody would have to be creative in how to submit utrace. It doesn't have to be called utrace (it was really a random choice, for lack of a letter that was not already used in front of the word "trace"). So basically, I think the ideas behind utrace are sound, and the necessity of a new interface is acknowledged. But I believe the integration/submission process with the kernel folks has to restart from scratch, with a clean slate. We'll see. There are many conferences and meetings coming up in the near future where things can be discussed further.

    On the second day, Friday, we had the tools talks. It was interesting to observe the more "kernel"-oriented people's behavior towards the gcc etc. community. The first talk was by Mark Mitchell, about GCC and its new plugin architecture. After that, Paolo talked about the new C++1x standard, which will be finalized in 2011; many features are already implemented in the libstdc++ library and gcc and are usable today. We had a few minutes (really, the half-day track was quite short) where Bradley Kuhn from the Software Freedom Law Center explained the GPLv3 exception for gcc (needed because of the new gcc plugin architecture and the availability of the intermediate results of the compilation, which is a new thing). I will not try to explain it here, but basically you cannot take the result of the preprocessing and then use that in your own proprietary compiler.
    After that we had a talk by Ian Taylor about the new gold linker. One good thing in that area is that they are trying to make gold the new default linker (for instance, Fedora will use gold as the distro linker). However, gold is very different from binutils' old linker; it doesn't use a linker script, for instance. The kernel has been linked with gold many times as an exercise (the ground work was done by Kris Van Hees), but this needs to be constantly tested/monitored because the kernel linker script is very complex and uses esoteric features (Wenji is now monitoring that each kernel RC can be built with gold). It was positive that people are now aware of gold and the need for it to be ported to more architectures. It seems that the porting is very easy, with little arch-dependent code. Finally, Tom Tromey presented on gdb and the Archer project. Archer is a development branch of gdb mostly done by RedHat, where they are focusing on better C++ printing, C++ expression parsing, and plugins. The Archer work is merged regularly into the gdb mainline.

    In general it was a good conference. I did miss most of the first day, because that's when I flew in, but I caught a couple of talks. Nothing earth-shattering, except for Google giving each registered person a free Android phone. Yey.

    Read the article

  • Leveraging Microsoft Patterns and Practices

    - by Tim Murphy
    I want to bring the Patterns and Practices group to the attention of those who have not already been exposed. I have been a fan of the P&P team since they came out with the original Application Blocks, which eventually turned into the Enterprise Library. Their main purpose is to assemble guidance and tools that make it easier for all of us to build amazing solutions. I would simply suggest you spend some time exploring the information and code libraries that they have produced. Free resources are always a great find, and I have used a number of the P&P solutions over the years with success. If nothing else you may find some new ideas. Enjoy. http://msdn.microsoft.com/en-us/practices/
    del.icio.us Tags: Patterns and Practices, Microsoft, Architecture, software development

    Read the article

  • Windows Azure Use Case: Hybrid Applications

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Organizations see the need for computing infrastructures that they can “rent” or pay for only when they need them. They also understand the benefits of distributed computing, but do not want to create this infrastructure themselves. However, they may have considerations that prevent them from moving all of their current IT investment to a distributed environment:

    - Private data (do not want to send or store sensitive data off-site)
    - High dollar investment in current infrastructure
    - Applications currently running well, but which may need additional periodic capacity
    - Current applications not designed in a stateless fashion

    In these situations, a “hybrid” approach works best. In fact, with Windows Azure, a hybrid approach is an optimal way to implement distributed computing even when the stipulations above do not apply. Keeping a majority of the computing function local to the organization while exploring and expanding that footprint into Windows and SQL Azure is a good migration or expansion strategy. A “hybrid” architecture merely means that part of a computing cycle is shared between two architectures. For instance, some level of computing might be done in a Windows Azure web-based application, while the data is stored locally at the organization.

    Implementation: There are multiple methods for implementing a hybrid architecture, on a spectrum from very little interaction between the local infrastructure and Windows or SQL Azure to a great deal. The patterns fall into two broad schemas, and even these can be mixed.

    1. Client-Centric Hybrid Patterns. In this pattern, programs are coded such that the client system sends queries or compute requests to multiple systems. The “client” in this case might be a web-based codeset actually stored on another system (which acts as a client, the user’s device serving as the presentation layer) or a compiled program. In either case, the code on the client requestor carries the burden of defining the layout of the requests. While this pattern is often the easiest to code, it’s the most brittle. Any change in the architecture must be reflected on each client, but this can be mitigated by using a centralized system as the client, such as in the web scenario.

    2. System-Centric Hybrid Patterns. Another approach is to create a distributed architecture by turning on-site systems into “services” that can be called from Windows Azure using the Service Bus or the Access Control Services (ACS) capabilities, with the code calls coming from a series of in-process client applications. In this pattern you move the “client” interface into the server application logic. If you do not wish to change the application itself, you can “layer” the results of the code return using a product (such as Microsoft BizTalk) that exposes a Web Services Definition Language (WSDL) endpoint to Windows Azure using the Application Fabric. In effect, this is similar to creating a Service Oriented Architecture (SOA) environment, and has the advantage of de-coupling your computing architecture. If each system offers a “service” of the results of some software processing, the operating system or platform becomes immaterial, assuming it adheres to a service contract.
    There are important considerations when you federate a system, whether to Windows or SQL Azure or any other distributed architecture. While these considerations are consistent with coding any application for distributed computing, they are especially important for a hybrid application.

    - Connection resiliency - On-premise applications normally have low latency and good connection properties, something you’re not always guaranteed in a distributed and hybrid application. Whether the client is centralized or distributed, the code should be able to handle extended retry logic (see the sketch below).
    - Authorization and Access - In a single authorization environment like an Active Directory domain, security is handled at a user-password level. In a distributed computing environment, you have more options. You can mitigate this by using the Windows Azure Application Fabric ACS feature to make the Azure application aware of the App Fabric as an ADFS provider. However, a claims-based authentication structure is often a superior choice.
    - Consistency and Concurrency - When you have a Relational Database Management System (RDBMS), consistency and concurrency are part of the design. In a service architecture, you need to plan for sequential message handling and lifecycle.

    Resources:
    How to Build a Hybrid On-Premise/In Cloud Application: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/11/09/how-to-build-a-hybrid-on-premise-in-cloud-application.aspx
    General architecture guidance: http://blogs.msdn.com/b/buckwoody/archive/2010/12/21/windows-azure-learning-plan-architecture.aspx
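
    The connection-resiliency point above is the one most easily shown in code. The following is only a minimal sketch of extended retry logic around a call that crosses the on-premise/cloud boundary; the attempt count, the back-off, and the azureServiceClient/GetOrders names are illustrative placeholders, not part of any Azure SDK.

        // Minimal retry wrapper for calls that cross the on-premise/cloud boundary.
        // Attempt counts and back-off values here are illustrative only.
        using System;
        using System.Threading;

        static class Resilient
        {
            public static T Execute<T>(Func<T> remoteCall, int maxAttempts = 5)
            {
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        return remoteCall();
                    }
                    catch (Exception ex)
                    {
                        if (attempt >= maxAttempts) throw;

                        // Transient faults are expected in a hybrid topology;
                        // wait a little longer after every failed attempt.
                        Console.WriteLine("Attempt " + attempt + " failed: " + ex.Message + "; retrying...");
                        Thread.Sleep(TimeSpan.FromSeconds(attempt * 2));
                    }
                }
            }
        }

        // Usage (hypothetical service proxy):
        // var orders = Resilient.Execute(() => azureServiceClient.GetOrders(customerId));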

    Read the article

  • Integrate BING API for Search inside ASP.Net web application

    - by sreejukg
    As you might already know, Bing is the Microsoft search engine and is getting more popular day by day. Bing offers APIs that can be integrated into your website to increase your website's functionality. At this moment, there are two important APIs available: the Bing Search API and Bing Maps. The Search API enables you to build applications that utilize Bing’s technology. The API allows you to search multiple source types such as web, images, and video, and supports various output protocols such as JSON, XML, and SOAP. You will also be able to customize the search results as you wish for your public-facing website. The Bing Maps API allows you to build robust applications that use Bing Maps. In this article I am going to describe how you can integrate Bing search into your website.

    In order to start using Bing, first you need to sign in to http://www.bing.com/toolbox/bingdeveloper/ using your Windows Live credentials. Click on the Sign in button and you will be asked to enter your Windows Live credentials. Once signed in you will be redirected to the Developer page. Here you can create applications and get an AppID for each application. Since I am a first-time user, I don’t have any applications added. Click on the Add button to add a new application. You will be asked to enter certain details about your application. The fields are straightforward; the only thing you need to note is the website field, where you enter the website address from which you are going to use this application, and this field is optional too. Of course you need to agree to the terms and conditions and then click Save. Once you click on Save, the application will be created and the application ID will be available for your use.

    Now we have the AppID. Basically Bing supports three protocols: JSON, XML and SOAP. JSON is useful if you want to call the search requests directly from the browser and use JavaScript to parse the results, which makes JSON the favorite choice for AJAX applications. XML is the alternative for applications that do not support SOAP, e.g. Flash, Silverlight, etc. SOAP is ideal for strongly typed languages and gives a request/response object model. In this article I am going to demonstrate how to search the Bing API using the SOAP protocol from an ASP.NET application.

    For the purpose of this demonstration, I am going to create an ASP.NET project and implement the search functionality in an aspx page. Open Visual Studio, navigate to File -> New Project, and select ASP.NET Empty Web Application; I named the project “BingSearchSample”. Add a Search.aspx page to the project; once added, the solution explorer will look similar to the following. Now you need to add a web reference to the SOAP service available from Bing. To do this, from the solution explorer, right-click your project and select Add Service Reference. The new service reference dialog will appear. In the bottom left of the dialog you can find the Advanced button; click on it. Now the service reference settings dialog will appear. In the bottom left, you can find the Add Web Reference button; click on it. The add web reference dialog will appear. Enter the URL as http://api.bing.net/search.wsdl?AppID=<YourAppIDHere>&version=2.2 (replace <yourAppIDHere> with the AppID you have generated previously) and click on the button next to it. This will find the web service methods available. You can change the namespace suggested by Bing, but for the purpose of this demonstration I have accepted all the default settings.
    Click on the Add Reference button once you are done. Now the web reference to the Search service will be added to your project; you can find this under the solution explorer of your project. Now in the Search.aspx page that you previously created, place a textbox, a button, and a grid view. For the purpose of this demonstration, I have given them the identifiers (IDs) txtSearch, btnSearch, and gvSearch respectively. The idea is to search for the text entered in the text box using the Bing service and show the results in the grid view. In the design view, the Search.aspx page looks as follows. In the Search.aspx.cs page, add a using statement that points to net.bing.api. I have added the following code for the button click event handler. The code is very straightforward: it just calls the service with your AppID, a query to search, and a source for searching. Let us run this page and see the output when I enter Microsoft in my textbox. If you want to search a specific site, you can include the site name in the query parameter. For example, the following query will search for the word Microsoft on the www.microsoft.com website. searchRequest.Query = “site:www.microsoft.com Microsoft”; The output of this query is as follows. Integrating the Bing Search API into your website is easy and there is no limit on the customization of the interface you can do. There is no Bing branding required, so I believe this is a great option for web developers when they plan for site search.
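
    The excerpt above refers to the button-click handler but the code itself is not included here; the sketch below is an approximation of what it might look like. The proxy type names (SearchRequest, SourceType, BingService) are assumptions based on what the web reference generated from search.wsdl typically produces, and they may differ depending on the namespace you chose, so treat this as illustrative rather than copy-paste ready.

        // Search.aspx.cs - approximate shape of the btnSearch click handler.
        // Proxy type names are assumptions based on the generated web reference.
        using System;
        using BingSearchSample.net.bing.api;   // namespace of the added web reference (assumed)

        public partial class Search : System.Web.UI.Page
        {
            protected void btnSearch_Click(object sender, EventArgs e)
            {
                var searchRequest = new SearchRequest
                {
                    AppId   = "YOUR-APP-ID-HERE",          // the AppID created on the Bing developer site
                    Query   = txtSearch.Text,
                    Sources = new[] { SourceType.Web }     // search the web source only
                };

                var service  = new BingService();
                var response = service.Search(searchRequest);

                if (response.Web != null && response.Web.Results != null)
                {
                    gvSearch.DataSource = response.Web.Results;   // Title, Description, Url, ...
                    gvSearch.DataBind();
                }
            }
        }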

    Read the article

  • Windows Workflow Foundation in .NET4

    Windows Workflow Foundation (WF4) in .NET 4 is designed to make it easier for new developers to learn, addresses a wider range of customer scenarios, and is more efficient. WF is a programming model for composing application logic and coordinating execution, allowing developers to abstract complicated code while leveraging a set of runtime services. Activities are the building blocks that are composed together to build workflows. The runtime provides the ability to save the state...
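
    The composition model described in the excerpt is easy to see in a few lines of code. The following is a minimal sketch (not taken from the excerpted article) that composes stock WF4 activities into a Sequence and runs it with WorkflowInvoker; it needs a reference to System.Activities.

        // Minimal WF4 example: stock activities composed into a Sequence and run synchronously.
        using System;
        using System.Activities;
        using System.Activities.Statements;

        class Program
        {
            static void Main()
            {
                Activity workflow = new Sequence
                {
                    Activities =
                    {
                        new WriteLine { Text = "Starting work..." },
                        new Delay     { Duration = TimeSpan.FromSeconds(1) },
                        new WriteLine { Text = "Done." }
                    }
                };

                // WorkflowInvoker is the simplest host; richer hosts (WorkflowApplication,
                // WorkflowServiceHost) add runtime services such as persistence and tracking.
                WorkflowInvoker.Invoke(workflow);
            }
        }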

    Read the article

  • Upgrading only several packages, or packages from one source

    - by Cédric Girard
    We use the Deb/apt system to deploy PHP software (around 200 packages, plus libraries with dependencies). We have a build server, scripts, and a private repository. It's OK and runs fine, but we want to update our packages very often, and update Ubuntu packages only when our sysadmins have time to handle them. How can we do this? The only solution I see for now is to iterate over the package list and do an apt-get install $packagename. Not very easy or even resilient. Any other ideas?

    Read the article

  • SQL – Quick Start with Explorer Sections of NuoDB – Query NuoDB Database

    - by Pinal Dave
    This is the third post in the series of blog posts I am writing about NuoDB. NuoDB is a very innovative and easy-to-use product. I can clearly see how one can scale out NuoDB with so much ease and confidence. In my very first blog post we discussed how we can install NuoDB (link), and in my second post I discussed how we can manage the NuoDB database transaction engines and storage managers with a few clicks (link). Note: You can download NuoDB from here.

    In this post, we will learn how we can use the Explorer feature of NuoDB to do various SQL operations. NuoDB has a browser-based Explorer, which is very powerful and has many of the features any IDE would normally have. Let us see how it works in the following step-by-step tutorial.

    Go to the NuoDB Console by typing the following URL in your browser: http://localhost:8080/ It will bring you to the QuickStart screen. Make sure that you have created the sample database; if you have not, click on Create Database and create it. Now go to the NuoDB Explorer by clicking on the main tab, and it will ask you for your domain username and password. Enter the username domain and the password bird. Alternatively, you can enter the username quickstart and the password quickstart. Once you enter the password you will be able to see the databases. In our example we have installed the sample database, hence you will see the Test database in our Database Hierarchy screen.

    When you click on the database it will ask for the database login. Note that the database login is different from the domain login, and you will have to enter your database login over here. In our case the database username is dba and the password is goalie. Once you enter a valid username and password it will display your database. Further expand your database and you will notice various objects in your database. Once you have explored the various objects, select any table and click on Open. When you click on Execute, it will display the SQL script to select the data from the table. The autogenerated script displays the entire result set from the database.

    The NuoDB Explorer is very powerful and makes the life of developers very easy. If you click on List SQL Statements it will list all the available SQL statements right away in the Query Editor (you can see the popup window in the following image). Here is the cool thing for geeks: you can even click on Query Plan and it will display the text-based query plan as well. In the case of a SELECT, the query plan will be much simpler; however, when we write complex queries it will be very interesting. We can use the query plan tab for performance tuning of the database. Here is another feature: when we click on List Tables in NuoDB Explorer, it lists all the available tables in the query editor. This is very helpful when we are writing a long, complex query.

    Here is a relatively complex example I have built using Inner Join syntax. Right below it I have displayed the Query Plan. The query plan displays all the little details related to the query. Well, we just wrote a multi-table query and executed it against the NuoDB database. You can use the NuoDB Admin section and do various analyses of the query and its performance.

    NuoDB is a distributed database built on a patented emergent architecture with full support for SQL and ACID guarantees. It allows you to add Transaction Engine processes to a running system to improve the performance of your system. You can also add a second Storage Engine to your running system for redundancy purposes. Conversely, you can shut down processes when you don’t need the extra database resources. NuoDB also provides developers and administrators with a single intuitive interface for centrally monitoring deployments. If you have read my blog posts and have not tried out NuoDB, I strongly suggest that you download it today and catch up with the learning with me. Trust me: though the product is very powerful, it is extremely easy to learn and use.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • MacBook Pro Compatibility, Multitouch, and General Experience

    - by jondavidjohn
    I am an ex-Ubuntu user and decided to go to OS X, mainly because I was going to be working in an OS X shop and felt like I needed a more mainstream OS to run production-level software packages like Adobe's. Six months in, and I am more than happy with my MacBook Pro purchase. The physical build quality alone warrants the premium price tag, but I am now looking at my day-to-day demands and realize that I really do not use any software that prevents me from turning back to Ubuntu. My question now is, in terms of the 2010 MacBook Pro, how is the hardware compatibility? Do the trackpad multitouch gestures work with 10.10? Are they oversensitive? And for anyone who has a relatively new MacBook Pro running Ubuntu, how is the general experience coming from an OS X environment?

    Read the article

  • LINQ for SQL Developers and DBA’s

    - by AtulThakor
    Firstly I’d just like to thank the guys who organise the SQL Server User Group (Martin/Tony/Chris) for giving me the opportunity to speak at the recent event. Sorry about the slides taking so long, but here they are along with some extra information. The demos were all done using LINQPad 4.0, which can be downloaded here: http://www.linqpad.net/ There are two versions, 3.5 and 4.0. With 3.5 you should be able to replicate the problem I showed, where a query using a parameter that is X characters long creates a different execution plan than a query using a parameter that is Y characters long; otherwise I would just use 4.0. The sample database used is AdventureWorksLT2008, which can be downloaded from here: http://msftdbprodsamples.codeplex.com/releases/view/37109 The scripts have been named so that you can select the appropriate way to run them, i.e. C# expression / C# statement; each script can be run individually by highlighting the query and clicking the play symbol or hitting F5. Scripts and slides: http://sqlblogcasts.com/blogs/atulthakor/An%20Introduction%20to%20LINQ.zip Please don't hesitate to send any questions via email/Twitter, and I’ll try my best to answer them! Thanks, Atul
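
    For anyone who has not used LINQPad before, the snippet below shows the sort of query the downloadable scripts contain; it is an illustration, not one of the actual demo scripts. Run against AdventureWorksLT2008 as a "C# statement", it assumes LINQPad's usual convention of exposing tables as pluralised properties of the data context.

        // Illustrative LINQPad "C# statement" against AdventureWorksLT2008.
        // Table/column names follow the AdventureWorksLT schema; Dump() is LINQPad's
        // built-in extension method for rendering results (and the generated SQL).
        var expensiveProducts =
            from p in Products
            where p.ListPrice > 1000
            orderby p.ListPrice descending
            select new { p.Name, p.ProductNumber, p.ListPrice };

        expensiveProducts.Dump();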

    Read the article

  • SQL Server Data Tools CTP4 Released!

    - by hassanfadili
    The SQL Server team has released the new SQL Server Data Tools CTP4. Congratulations and thanks to Gert Drapers and his team on this great milestone. To learn more about this SSDT CTP4 release, check:
    What’s new in SQL Server Data Tools CTP4? http://blogs.msdn.com/b/ssdt/archive/2011/11/21/what-s-new-in-sql-server-data-tools-ctp4.aspx
    SQL Server Data Tools CTP4 vs. VS2010 Database Projects: http://blogs.msdn.com/b/ssdt/archive/2011/11/21/sql-server-data-tools-ctp4-vs-vs2010-database-projects.aspx
    Top VSDB->SSDT Project Conversion Issues: http://blogs.msdn.com/b/ssdt/archive/2011/11/21/top-vsdb-gt-ssdt-project-conversion-issues.aspx
    Uninstalling SQL Server Developer Tools CTP3 (Code-named “Juneau”): http://blogs.msdn.com/b/ssdt/archive/2011/11/21/uninstalling-ssdt-ctp3-code-named-juneau.aspx (this one actually points to a nifty PowerShell script to help you uninstall)
    Have fun.

    Read the article

  • Tiny DSLR Intervalometer Snaps Pics On User-Defined Schedule

    - by Jason Fitzpatrick
    If you’re interested in time-lapse photography but underwhelmed by the in-camera options (or lack thereof), or don’t want to shell out money for an expensive commercial intervalometer, this DIY option is a pretty slick solution. Achim Sack, a fan of hardware hacking and time-lapse photography, created a super tiny interval timer that works with Nikon, Canon, and Pentax DSLRs. Plug it in, snap a shot to set the interval (anywhere between 0.4 seconds and 18 minutes), and then leave it be. As long as you have space on the memory card and power left in the battery, the camera will keep snapping pictures. Hit up the link below to see his schematics, parts list, and more photos of the build. Interval Timer v2 [via Hack A Day]

    Read the article

  • Mouse and Keyboard Freeze

    - by Kev
    I installed Ubuntu 10.10 today and have had mouse problems since. Symptoms: at some arbitrary point in time (frequency: 2-3 times per hour), the mouse and keyboard stop working, possibly forever. I started System Monitor and found out the network was shut down just before the mouse froze. Sometimes my keyboard keeps typing one key, for example: 77777777777777777777777777777777777777777777777777777..... (it keeps typing for 20 seconds). I found that a script solves the freeze problem (I hit the power button):

    ----------------- /etc/acpi/powerbtn.sh ------------------------
    event=button[ /]power
    action=/usr/sbin/fix_mouse.sh

    ----------------- /usr/sbin/fix_mouse.sh ------------------------
    rmmod psmouse
    modprobe psmouse

    Yesterday I installed Ubuntu 10.04, which FAILED; it also had the mouse problem. When I switch back to Windows XP, the network card is down. It kept connecting and disconnecting once per second.

    CPU: i5
    Motherboard: ASUS P7P55D
    OS: Windows XP + Ubuntu 10.10
    Video Card: ATI 5770
    Mouse, Keyboard: PS/2

    Read the article

  • High CPU usage with Team Speak 3.0.0-rc2

    - by AlexTheBird
    The CPU usage is always around 40 percent. I use push-to-talk and I had uninstalled pulseaudio. Now I use Alsa. I don't even have to connect to a Server. By simply starting TS the cpu usage goes up 40 percent and stays there. The CPU usage of 3.0.0-rc1 [Build: 14468] is constantly 14 percent. This is the output of top, mpstat and ps aux while I am running TS3 ... of course: alexandros@alexandros-laptop:~$ top top - 18:20:07 up 2:22, 3 users, load average: 1.02, 0.85, 0.77 Tasks: 163 total, 1 running, 162 sleeping, 0 stopped, 0 zombie Cpu(s): 5.3%us, 1.9%sy, 0.1%ni, 91.8%id, 0.7%wa, 0.1%hi, 0.1%si, 0.0%st Mem: 2061344k total, 964028k used, 1097316k free, 69116k buffers Swap: 3997688k total, 0k used, 3997688k free, 449032k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 2714 alexandr 20 0 206m 31m 24m S 37 1.6 0:12.78 ts3client_linux 868 root 20 0 47564 27m 10m S 8 1.4 3:21.73 Xorg 1 root 20 0 2804 1660 1204 S 0 0.1 0:00.53 init 2 root 20 0 0 0 0 S 0 0.0 0:00.00 kthreadd 3 root RT 0 0 0 0 S 0 0.0 0:00.01 migration/0 4 root 20 0 0 0 0 S 0 0.0 0:00.45 ksoftirqd/0 5 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/0 6 root RT 0 0 0 0 S 0 0.0 0:00.00 migration/1 7 root 20 0 0 0 0 S 0 0.0 0:00.08 ksoftirqd/1 8 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/1 9 root 20 0 0 0 0 S 0 0.0 0:01.17 events/0 10 root 20 0 0 0 0 S 0 0.0 0:00.81 events/1 11 root 20 0 0 0 0 S 0 0.0 0:00.00 cpuset 12 root 20 0 0 0 0 S 0 0.0 0:00.00 khelper 13 root 20 0 0 0 0 S 0 0.0 0:00.00 async/mgr 14 root 20 0 0 0 0 S 0 0.0 0:00.00 pm 16 root 20 0 0 0 0 S 0 0.0 0:00.00 sync_supers 17 root 20 0 0 0 0 S 0 0.0 0:00.00 bdi-default 18 root 20 0 0 0 0 S 0 0.0 0:00.00 kintegrityd/0 19 root 20 0 0 0 0 S 0 0.0 0:00.00 kintegrityd/1 20 root 20 0 0 0 0 S 0 0.0 0:00.05 kblockd/0 21 root 20 0 0 0 0 S 0 0.0 0:00.02 kblockd/1 22 root 20 0 0 0 0 S 0 0.0 0:00.00 kacpid 23 root 20 0 0 0 0 S 0 0.0 0:00.00 kacpi_notify 24 root 20 0 0 0 0 S 0 0.0 0:00.00 kacpi_hotplug 25 root 20 0 0 0 0 S 0 0.0 0:00.99 ata/0 26 root 20 0 0 0 0 S 0 0.0 0:00.92 ata/1 27 root 20 0 0 0 0 S 0 0.0 0:00.00 ata_aux 28 root 20 0 0 0 0 S 0 0.0 0:00.00 ksuspend_usbd 29 root 20 0 0 0 0 S 0 0.0 0:00.00 khubd alexandros@alexandros-laptop:~$ mpstat Linux 2.6.32-32-generic (alexandros-laptop) 16.06.2011 _i686_ (2 CPU) 18:20:15 CPU %usr %nice %sys %iowait %irq %soft %steal %guest %idle 18:20:15 all 5,36 0,09 1,91 0,68 0,07 0,06 0,00 0,00 91,83 alexandros@alexandros-laptop:~$ ps aux USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.0 2804 1660 ? Ss 15:58 0:00 /sbin/init root 2 0.0 0.0 0 0 ? S 15:58 0:00 [kthreadd] root 3 0.0 0.0 0 0 ? S 15:58 0:00 [migration/0] root 4 0.0 0.0 0 0 ? S 15:58 0:00 [ksoftirqd/0] root 5 0.0 0.0 0 0 ? S 15:58 0:00 [watchdog/0] root 6 0.0 0.0 0 0 ? S 15:58 0:00 [migration/1] root 7 0.0 0.0 0 0 ? S 15:58 0:00 [ksoftirqd/1] root 8 0.0 0.0 0 0 ? S 15:58 0:00 [watchdog/1] root 9 0.0 0.0 0 0 ? S 15:58 0:01 [events/0] root 10 0.0 0.0 0 0 ? S 15:58 0:00 [events/1] root 11 0.0 0.0 0 0 ? S 15:58 0:00 [cpuset] root 12 0.0 0.0 0 0 ? S 15:58 0:00 [khelper] root 13 0.0 0.0 0 0 ? S 15:58 0:00 [async/mgr] root 14 0.0 0.0 0 0 ? S 15:58 0:00 [pm] root 16 0.0 0.0 0 0 ? S 15:58 0:00 [sync_supers] root 17 0.0 0.0 0 0 ? S 15:58 0:00 [bdi-default] root 18 0.0 0.0 0 0 ? S 15:58 0:00 [kintegrityd/0] root 19 0.0 0.0 0 0 ? S 15:58 0:00 [kintegrityd/1] root 20 0.0 0.0 0 0 ? S 15:58 0:00 [kblockd/0] root 21 0.0 0.0 0 0 ? S 15:58 0:00 [kblockd/1] root 22 0.0 0.0 0 0 ? S 15:58 0:00 [kacpid] root 23 0.0 0.0 0 0 ? 
S 15:58 0:00 [kacpi_notify] root 24 0.0 0.0 0 0 ? S 15:58 0:00 [kacpi_hotplug] root 25 0.0 0.0 0 0 ? S 15:58 0:00 [ata/0] root 26 0.0 0.0 0 0 ? S 15:58 0:00 [ata/1] root 27 0.0 0.0 0 0 ? S 15:58 0:00 [ata_aux] root 28 0.0 0.0 0 0 ? S 15:58 0:00 [ksuspend_usbd] root 29 0.0 0.0 0 0 ? S 15:58 0:00 [khubd] root 30 0.0 0.0 0 0 ? S 15:58 0:00 [kseriod] root 31 0.0 0.0 0 0 ? S 15:58 0:00 [kmmcd] root 34 0.0 0.0 0 0 ? S 15:58 0:00 [khungtaskd] root 35 0.0 0.0 0 0 ? S 15:58 0:00 [kswapd0] root 36 0.0 0.0 0 0 ? SN 15:58 0:00 [ksmd] root 37 0.0 0.0 0 0 ? S 15:58 0:00 [aio/0] root 38 0.0 0.0 0 0 ? S 15:58 0:00 [aio/1] root 39 0.0 0.0 0 0 ? S 15:58 0:00 [ecryptfs-kthrea] root 40 0.0 0.0 0 0 ? S 15:58 0:00 [crypto/0] root 41 0.0 0.0 0 0 ? S 15:58 0:00 [crypto/1] root 48 0.0 0.0 0 0 ? S 15:58 0:03 [scsi_eh_0] root 50 0.0 0.0 0 0 ? S 15:58 0:00 [scsi_eh_1] root 53 0.0 0.0 0 0 ? S 15:58 0:00 [kstriped] root 54 0.0 0.0 0 0 ? S 15:58 0:00 [kmpathd/0] root 55 0.0 0.0 0 0 ? S 15:58 0:00 [kmpathd/1] root 56 0.0 0.0 0 0 ? S 15:58 0:00 [kmpath_handlerd] root 57 0.0 0.0 0 0 ? S 15:58 0:00 [ksnapd] root 58 0.0 0.0 0 0 ? S 15:58 0:03 [kondemand/0] root 59 0.0 0.0 0 0 ? S 15:58 0:02 [kondemand/1] root 60 0.0 0.0 0 0 ? S 15:58 0:00 [kconservative/0] root 61 0.0 0.0 0 0 ? S 15:58 0:00 [kconservative/1] root 213 0.0 0.0 0 0 ? S 15:58 0:00 [scsi_eh_2] root 222 0.0 0.0 0 0 ? S 15:58 0:00 [scsi_eh_3] root 234 0.0 0.0 0 0 ? S 15:58 0:00 [scsi_eh_4] root 235 0.0 0.0 0 0 ? S 15:58 0:01 [usb-storage] root 255 0.0 0.0 0 0 ? S 15:58 0:00 [jbd2/sda5-8] root 256 0.0 0.0 0 0 ? S 15:58 0:00 [ext4-dio-unwrit] root 257 0.0 0.0 0 0 ? S 15:58 0:00 [ext4-dio-unwrit] root 290 0.0 0.0 0 0 ? S 15:58 0:00 [flush-8:0] root 318 0.0 0.0 2316 888 ? S 15:58 0:00 upstart-udev-bridge --daemon root 321 0.0 0.0 2616 1024 ? S<s 15:58 0:00 udevd --daemon root 526 0.0 0.0 0 0 ? S 15:58 0:00 [kpsmoused] root 528 0.0 0.0 0 0 ? S 15:58 0:00 [led_workqueue] root 650 0.0 0.0 0 0 ? S 15:58 0:00 [radeon/0] root 651 0.0 0.0 0 0 ? S 15:58 0:00 [radeon/1] root 652 0.0 0.0 0 0 ? S 15:58 0:00 [ttm_swap] root 654 0.0 0.0 2612 984 ? S< 15:58 0:00 udevd --daemon root 656 0.0 0.0 0 0 ? S 15:58 0:00 [hd-audio0] root 657 0.0 0.0 2612 916 ? S< 15:58 0:00 udevd --daemon root 674 0.6 0.0 0 0 ? S 15:58 0:57 [phy0] syslog 715 0.0 0.0 34812 1776 ? Sl 15:58 0:00 rsyslogd -c4 102 731 0.0 0.0 3236 1512 ? Ss 15:58 0:02 dbus-daemon --system --fork root 740 0.0 0.1 19088 3380 ? Ssl 15:58 0:00 gdm-binary root 744 0.0 0.1 18900 4032 ? Ssl 15:58 0:01 NetworkManager avahi 749 0.0 0.0 2928 1520 ? S 15:58 0:00 avahi-daemon: running [alexandros-laptop.local] avahi 752 0.0 0.0 2928 544 ? Ss 15:58 0:00 avahi-daemon: chroot helper root 753 0.0 0.1 4172 2300 ? S 15:58 0:00 /usr/sbin/modem-manager root 762 0.0 0.1 20584 3152 ? Sl 15:58 0:00 /usr/sbin/console-kit-daemon --no-daemon root 836 0.0 0.1 20856 3864 ? Sl 15:58 0:00 /usr/lib/gdm/gdm-simple-slave --display-id /org/gnome/DisplayManager/Display1 root 856 0.0 0.1 4836 2388 ? S 15:58 0:00 /sbin/wpa_supplicant -u -s root 868 2.3 1.3 36932 27924 tty7 Rs+ 15:58 3:22 /usr/bin/X :0 -nr -verbose -auth /var/run/gdm/auth-for-gdm-a46T4j/database -nolisten root 891 0.0 0.0 1792 564 tty4 Ss+ 15:58 0:00 /sbin/getty -8 38400 tty4 root 901 0.0 0.0 1792 564 tty5 Ss+ 15:58 0:00 /sbin/getty -8 38400 tty5 root 908 0.0 0.0 1792 564 tty2 Ss+ 15:58 0:00 /sbin/getty -8 38400 tty2 root 910 0.0 0.0 1792 568 tty3 Ss+ 15:58 0:00 /sbin/getty -8 38400 tty3 root 913 0.0 0.0 1792 564 tty6 Ss+ 15:58 0:00 /sbin/getty -8 38400 tty6 root 917 0.0 0.0 2180 1072 ? 
Ss 15:58 0:00 acpid -c /etc/acpi/events -s /var/run/acpid.socket daemon 924 0.0 0.0 2248 432 ? Ss 15:58 0:00 atd root 927 0.0 0.0 2376 900 ? Ss 15:58 0:00 cron root 950 0.0 0.0 11736 1372 ? Ss 15:58 0:00 /usr/sbin/winbindd root 958 0.0 0.0 11736 1184 ? S 15:58 0:00 /usr/sbin/winbindd root 974 0.0 0.1 6832 2580 ? Ss 15:58 0:00 /usr/sbin/cupsd -C /etc/cups/cupsd.conf root 1078 0.0 0.0 1792 564 tty1 Ss+ 15:58 0:00 /sbin/getty -8 38400 tty1 gdm 1097 0.0 0.0 3392 772 ? S 15:58 0:00 /usr/bin/dbus-launch --exit-with-session root 1112 0.0 0.1 19216 3292 ? Sl 15:58 0:00 /usr/lib/gdm/gdm-session-worker root 1116 0.0 0.1 5540 2932 ? S 15:58 0:01 /usr/lib/upower/upowerd root 1131 0.0 0.1 6308 3824 ? S 15:58 0:00 /usr/lib/policykit-1/polkitd 108 1163 0.0 0.2 16788 4360 ? Ssl 15:58 0:01 /usr/sbin/hald root 1164 0.0 0.0 3536 1300 ? S 15:58 0:00 hald-runner root 1188 0.0 0.0 3612 1256 ? S 15:58 0:00 hald-addon-input: Listening on /dev/input/event6 /dev/input/event5 /dev/input/event2 root 1194 0.0 0.0 3612 1224 ? S 15:58 0:00 /usr/lib/hal/hald-addon-rfkill-killswitch root 1200 0.0 0.0 3608 1240 ? S 15:58 0:00 /usr/lib/hal/hald-addon-generic-backlight root 1202 0.0 0.0 3616 1236 ? S 15:58 0:02 hald-addon-storage: polling /dev/sr0 (every 2 sec) root 1204 0.0 0.0 3616 1236 ? S 15:58 0:00 hald-addon-storage: polling /dev/sdb (every 2 sec) root 1211 0.0 0.0 3624 1220 ? S 15:58 0:00 /usr/lib/hal/hald-addon-cpufreq 108 1212 0.0 0.0 3420 1200 ? S 15:58 0:00 hald-addon-acpi: listening on acpid socket /var/run/acpid.socket 1000 1222 0.0 0.1 24196 2816 ? Sl 15:58 0:00 /usr/bin/gnome-keyring-daemon --daemonize --login 1000 1240 0.0 0.3 28228 7312 ? Ssl 15:58 0:00 gnome-session 1000 1274 0.0 0.0 3284 356 ? Ss 15:58 0:00 /usr/bin/ssh-agent /usr/bin/dbus-launch --exit-with-session gnome-session 1000 1277 0.0 0.0 3392 772 ? S 15:58 0:00 /usr/bin/dbus-launch --exit-with-session gnome-session 1000 1278 0.0 0.0 3160 1652 ? Ss 15:58 0:00 /bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session 1000 1281 0.0 0.2 8172 4636 ? S 15:58 0:00 /usr/lib/libgconf2-4/gconfd-2 1000 1287 0.0 0.5 24228 10896 ? Ss 15:58 0:03 /usr/lib/gnome-settings-daemon/gnome-settings-daemon 1000 1290 0.0 0.1 6468 2364 ? S 15:58 0:00 /usr/lib/gvfs/gvfsd 1000 1293 0.0 0.6 38104 13004 ? S 15:58 0:03 metacity 1000 1296 0.0 0.1 30280 2628 ? Ssl 15:58 0:00 /usr/lib/gvfs//gvfs-fuse-daemon /home/alexandros/.gvfs 1000 1301 0.0 0.0 3344 988 ? S 15:58 0:03 syndaemon -i 0.5 -k 1000 1303 0.0 0.1 8060 3488 ? S 15:58 0:00 /usr/lib/gvfs/gvfs-gdu-volume-monitor root 1306 0.0 0.1 15692 3104 ? Sl 15:58 0:00 /usr/lib/udisks/udisks-daemon 1000 1307 0.4 1.0 50748 21684 ? S 15:58 0:34 python -u /usr/share/screenlets/DigiClock/DigiClockScreenlet.py 1000 1308 0.0 0.9 35608 18564 ? S 15:58 0:00 python /usr/share/screenlets-manager/screenlets-daemon.py 1000 1309 0.0 0.3 19524 6468 ? S 15:58 0:00 /usr/lib/policykit-1-gnome/polkit-gnome-authentication-agent-1 1000 1311 0.0 0.5 37412 11788 ? S 15:58 0:01 gnome-power-manager 1000 1312 0.0 1.0 50772 22628 ? S 15:58 0:03 gnome-panel 1000 1313 0.1 1.5 102648 31184 ? Sl 15:58 0:10 nautilus root 1314 0.0 0.0 5188 996 ? S 15:58 0:02 udisks-daemon: polling /dev/sdb /dev/sr0 1000 1315 0.0 0.6 51948 12464 ? SL 15:58 0:01 nm-applet --sm-disable 1000 1317 0.0 0.1 16956 2364 ? Sl 15:58 0:00 /usr/lib/gvfs/gvfs-afc-volume-monitor 1000 1318 0.0 0.3 20164 7792 ? S 15:58 0:00 bluetooth-applet 1000 1321 0.0 0.1 7260 2384 ? S 15:58 0:00 /usr/lib/gvfs/gvfs-gphoto2-volume-monitor 1000 1323 0.0 0.5 37436 12124 ? 
S 15:58 0:00 /usr/lib/notify-osd/notify-osd 1000 1324 0.0 1.9 197928 40456 ? Ssl 15:58 0:06 /home/alexandros/.dropbox-dist/dropbox 1000 1329 0.0 0.3 20136 7968 ? S 15:58 0:00 /usr/bin/gnome-screensaver --no-daemon 1000 1331 0.0 0.1 7056 3112 ? S 15:58 0:00 /usr/lib/gvfs/gvfsd-trash --spawner :1.6 /org/gtk/gvfs/exec_spaw/0 root 1340 0.0 0.0 2236 1008 ? S 15:58 0:00 /sbin/dhclient -d -sf /usr/lib/NetworkManager/nm-dhcp-client.action -pf /var/run/dhcl 1000 1348 0.0 0.1 42252 3680 ? Ssl 15:58 0:00 /usr/lib/bonobo-activation/bonobo-activation-server --ac-activate --ior-output-fd=19 1000 1384 0.0 1.7 80244 35480 ? Sl 15:58 0:02 /usr/bin/python /usr/lib/deskbar-applet/deskbar-applet/deskbar-applet --oaf-activate- 1000 1388 0.0 0.5 26196 11804 ? S 15:58 0:01 /usr/lib/gnome-panel/wnck-applet --oaf-activate-iid=OAFIID:GNOME_Wncklet_Factory --oa 1000 1393 0.1 0.5 25876 11548 ? S 15:58 0:08 /usr/lib/gnome-applets/multiload-applet-2 --oaf-activate-iid=OAFIID:GNOME_MultiLoadAp 1000 1394 0.0 0.5 25600 11140 ? S 15:58 0:03 /usr/lib/gnome-applets/cpufreq-applet --oaf-activate-iid=OAFIID:GNOME_CPUFreqApplet_F 1000 1415 0.0 0.5 39192 11156 ? S 15:58 0:01 /usr/lib/gnome-power-manager/gnome-inhibit-applet --oaf-activate-iid=OAFIID:GNOME_Inh 1000 1417 0.0 0.7 53544 15488 ? Sl 15:58 0:00 /usr/lib/gnome-applets/mixer_applet2 --oaf-activate-iid=OAFIID:GNOME_MixerApplet_Fact 1000 1419 0.0 0.4 23816 9068 ? S 15:58 0:00 /usr/lib/gnome-panel/notification-area-applet --oaf-activate-iid=OAFIID:GNOME_Notific 1000 1488 0.0 0.3 20964 7548 ? S 15:58 0:00 /usr/lib/gnome-disk-utility/gdu-notification-daemon 1000 1490 0.0 0.1 6608 2484 ? S 15:58 0:00 /usr/lib/gvfs/gvfsd-burn --spawner :1.6 /org/gtk/gvfs/exec_spaw/1 1000 1510 0.0 0.1 6348 2084 ? S 15:58 0:00 /usr/lib/gvfs/gvfsd-metadata 1000 1531 0.0 0.3 19472 6616 ? S 15:58 0:00 /usr/lib/gnome-user-share/gnome-user-share 1000 1535 0.0 0.4 77128 8392 ? Sl 15:58 0:00 /usr/lib/evolution/evolution-data-server-2.28 --oaf-activate-iid=OAFIID:GNOME_Evoluti 1000 1601 0.0 0.5 69576 11800 ? Sl 15:59 0:00 /usr/lib/evolution/2.28/evolution-alarm-notify 1000 1604 0.0 0.7 33924 15888 ? S 15:59 0:00 python /usr/share/system-config-printer/applet.py 1000 1701 0.0 0.5 37116 11968 ? S 15:59 0:00 update-notifier 1000 1892 4.5 7.0 406720 145312 ? Sl 17:11 3:09 /opt/google/chrome/chrome 1000 1896 0.0 0.1 69812 3680 ? S 17:11 0:02 /opt/google/chrome/chrome 1000 1898 0.0 0.6 91420 14080 ? S 17:11 0:00 /opt/google/chrome/chrome --type=zygote 1000 1916 0.2 1.3 140780 27220 ? Sl 17:11 0:12 /opt/google/chrome/chrome --type=extension --disable-client-side-phishing-detection - 1000 1918 0.7 1.8 155720 37912 ? Sl 17:11 0:31 /opt/google/chrome/chrome --type=extension --disable-client-side-phishing-detection - 1000 1921 0.0 1.0 135904 21052 ? Sl 17:11 0:02 /opt/google/chrome/chrome --type=extension --disable-client-side-phishing-detection - 1000 1927 6.5 3.6 194604 74960 ? Sl 17:11 4:32 /opt/google/chrome/chrome --type=renderer --disable-client-side-phishing-detection -- 1000 2156 0.4 0.7 48344 14896 ? Rl 18:03 0:04 gnome-terminal 1000 2157 0.0 0.0 1988 712 ? S 18:03 0:00 gnome-pty-helper 1000 2158 0.0 0.1 6504 3860 pts/0 Ss 18:03 0:00 bash 1000 2564 0.2 0.1 6624 3984 pts/1 Ss+ 18:17 0:00 bash 1000 2711 0.0 0.0 4208 1352 ? S 18:19 0:00 /bin/bash /home/alexandros/Programme/TeamSpeak3-Client-linux_x86_back/ts3client_runsc 1000 2714 36.5 1.5 210872 31960 ? 
SLl 18:19 0:18 ./ts3client_linux_x86 1000 2743 0.0 0.0 2716 1068 pts/0 R+ 18:20 0:00 ps aux Output of vmstat: alexandros@alexandros-laptop:~$ vmstat procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 0 1093324 69840 449496 0 0 27 10 476 667 6 2 91 1 Output of lsusb alexandros@alexandros-laptop:~$ lspci 00:00.0 Host bridge: Silicon Integrated Systems [SiS] 671MX 00:01.0 PCI bridge: Silicon Integrated Systems [SiS] PCI-to-PCI bridge 00:02.0 ISA bridge: Silicon Integrated Systems [SiS] SiS968 [MuTIOL Media IO] (rev 01) 00:02.5 IDE interface: Silicon Integrated Systems [SiS] 5513 [IDE] (rev 01) 00:03.0 USB Controller: Silicon Integrated Systems [SiS] USB 1.1 Controller (rev 0f) 00:03.1 USB Controller: Silicon Integrated Systems [SiS] USB 1.1 Controller (rev 0f) 00:03.3 USB Controller: Silicon Integrated Systems [SiS] USB 2.0 Controller 00:05.0 IDE interface: Silicon Integrated Systems [SiS] SATA Controller / IDE mode (rev 03) 00:06.0 PCI bridge: Silicon Integrated Systems [SiS] PCI-to-PCI bridge 00:07.0 PCI bridge: Silicon Integrated Systems [SiS] PCI-to-PCI bridge 00:0d.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10) 00:0f.0 Audio device: Silicon Integrated Systems [SiS] Azalia Audio Controller 01:00.0 VGA compatible controller: ATI Technologies Inc Mobility Radeon X2300 02:00.0 Ethernet controller: Atheros Communications Inc. AR5001 Wireless Network Adapter (rev 01) The Team Speak log file : 2011-06-19 19:04:04.223522|INFO | | | Logging started, clientlib version: 3.0.0-rc2 [Build: 14642] 2011-06-19 19:04:04.761149|ERROR |SoundBckndIntf| | /home/alexandros/Programme/TeamSpeak3-Client-linux_x86_back/soundbackends/libpulseaudio_linux_x86.so error: NOT_CONNECTED 2011-06-19 19:04:05.871770|INFO |ClientUI | | Failed to init text to speech engine 2011-06-19 19:04:05.894623|INFO |ClientUI | | TeamSpeak 3 client version: 3.0.0-rc2 [Build: 14642] 2011-06-19 19:04:05.895421|INFO |ClientUI | | Qt version: 4.7.2 2011-06-19 19:04:05.895571|INFO |ClientUI | | Using configuration location: /home/alexandros/.ts3client/ts3clientui_qt.conf 2011-06-19 19:04:06.559596|INFO |ClientUI | | Last update check was: Sa. Jun 18 00:08:43 2011 2011-06-19 19:04:06.560506|INFO | | | Checking for updates... 2011-06-19 19:04:07.357869|INFO | | | Update check, my version: 14642, latest version: 14642 2011-06-19 19:05:52.978481|INFO |PreProSpeex | 1| Speex version: 1.2rc1 2011-06-19 19:05:54.055347|INFO |UIHelpers | | setClientVolumeModifier: 10 -8 2011-06-19 19:05:54.057196|INFO |UIHelpers | | setClientVolumeModifier: 11 2 Thanks for taking the time to read my message. UPDATE: Thanks to nickguletskii's link I googled for "alsa cpu usage" (without quotes) and it brought me to a forum. A user wrote that by directly selecting the hardware with "plughw:x.x" won't impact the performance of the system. I have selected it in the TS 3 configuration and it worked. But this solution is not optimal because now no other program can access the sound output. If you need any further information or my question is unclear than please tell me.

    Read the article

  • Algorithmia Source Code released on CodePlex

    - by FransBouma
    Following the release of our BCL Extensions Library on CodePlex, we have now released the source-code of Algorithmia on CodePlex! Algorithmia is an algorithm and data-structures library for .NET 3.5 or higher and is one of the pillars LLBLGen Pro v3's designer is built on. The library contains many data-structures and algorithms, and the source-code is well documented and commented, often with links to official descriptions and papers of the algorithms and data-structures implemented. The source-code is shared using Mercurial on CodePlex and is licensed under the friendly BSD2 license. User documentation is not available at the moment but will be added soon. One of the main design goals of Algorithmia was to create a library which contains implementations of well-known algorithms which weren't already implemented in .NET itself. This way, more developers out there can enjoy the results of many years of what the field of Computer Science research has delivered. Some algorithms and datastructures are known in .NET but are re-implemented because the implementation in .NET isn't efficient for many situations or lacks features. An example is the linked list in .NET: it doesn't have an O(1) concat operation, as every node refers to the containing LinkedList object it's stored in. This is bad for algorithms which rely on O(1) concat operations, like the Fibonacci heap implementation in Algorithmia. Algorithmia therefore contains a linked list with an O(1) concat feature. The following functionality is available in Algorithmia: Command, Command management. This system is usable to build a fully undo/redo aware system by building your object graph using command-aware classes. The Command pattern is implemented using a system which allows transparent undo-redo and command grouping so you can use it to make a class undo/redo aware and set properties, use its contents without using commands at all. The Commands namespace is the namespace to start. Classes you'd want to look at are CommandifiedMember, CommandifiedList and KeyedCommandifiedList. See the CommandQueueTests in the test project for examples. Graphs, Graph algorithms. Algorithmia contains a sophisticated graph class hierarchy and algorithms implemented onto them: non-directed and directed graphs, as well as a subgraph view class, which can be used to create a view onto an existing graph class which can be self-maintaining. Algorithms include transitive closure, topological sorting and others. A feature rich depth-first search (DFS) crawler is available so DFS based algorithms can be implemented quickly. All graph classes are undo/redo aware, as they can be set to be 'commandified'. When a graph is 'commandified' it will do its housekeeping through commands, which makes it fully undo-redo aware, so you can remove, add and manipulate the graph and undo/redo the activity automatically without any extra code. If you define the properties of the class you set as the vertex type using CommandifiedMember, you can manipulate the properties of vertices and the graph contents with full undo/redo functionality without any extra code. Heaps. Heaps are data-structures which have the largest or smallest item stored in them always as the 'root'. Extracting the root from the heap makes the heap determine the next in line to be the 'maximum' or 'minimum' (max-heap vs. min-heap, all heaps in Algorithmia can do both). 
Algorithmia contains various heaps, among them an implementation of the Fibonacci heap, one of the most efficient heap datastructures known today, especially when you want to merge different instances into one. Priority queues. Priority queues are specializations of heaps. Algorithmia contains a couple of them. Sorting. What's an algorithm library without sort algorithms? Algorithmia implements a couple of sort algorithms which sort the data in-place. This aspect is important in situations where you want to sort the elements in a buffer/list/ICollection in-place, so all data stays in the data-structure it already is stored in. PropertyBag. It re-implements Tony Allowatt's original idea in .NET 3.5 specific syntax, which is to have a generic property bag and to be able to build an object in code at runtime which can be bound to a property grid for editing. This is handy for when you have data / settings stored in XML or other format, and want to create an editable form of it without creating many editors. IEditableObject/IDataErrorInfo implementations. It contains default implementations for IEditableObject and IDataErrorInfo (EditableObjectDataContainer for IEditableObject and ErrorContainer for IDataErrorInfo), which make it very easy to implement these interfaces (just a few lines of code) without having to worry about bookkeeping during databinding. They work seamlessly with CommandifiedMember as well, so your undo/redo aware code can use them out of the box. EventThrottler. It contains an event throttler, which can be used to filter out duplicate events in an event stream coming into an observer from an event. This can greatly enhance performance in your UI without needing to do anything other than hooking it up so it's placed between the event source and your real handler. If your UI is flooded with events from data-structures observed by your UI or a middle tier, you can use this class to filter out duplicates to avoid redundant updates to UI elements or to avoid having observers choke on many redundant events. Small, handy stuff. A MultiValueDictionary, which can store multiple unique values per key, instead of one with the default Dictionary, and is also merge-aware so you can merge two into one. A Pair class, to quickly group two elements together. Multiple interfaces for helping with building a de-coupled, observer based system, and some utility extension methods for the defined data-structures. We regularly update the library with new code. If you have ideas for new algorithms or want to share your contribution, feel free to discuss it on the project's Discussions page or send us a pull request. Enjoy!
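
    The remark above about .NET's own linked list is easy to demonstrate. The sketch below (plain BCL code, not Algorithmia's API) shows why LinkedList<T> cannot offer an O(1) concat: each node tracks the list that owns it, so merging two lists means detaching and re-adding nodes one at a time.

        // Why the BCL LinkedList<T> has no O(1) concat, as noted above.
        using System;
        using System.Collections.Generic;

        class LinkedListConcatDemo
        {
            static void Main()
            {
                var first  = new LinkedList<int>(new[] { 1, 2, 3 });
                var second = new LinkedList<int>(new[] { 4, 5, 6 });

                try
                {
                    first.AddLast(second.First);    // the node still belongs to 'second'
                }
                catch (InvalidOperationException ex)
                {
                    Console.WriteLine(ex.Message);  // node already belongs to another list
                }

                // The only option with the BCL type: move the nodes one at a time, O(n).
                while (second.First != null)
                {
                    var node = second.First;
                    second.RemoveFirst();           // detach from 'second'
                    first.AddLast(node);            // re-attach to 'first'
                }

                Console.WriteLine(string.Join(", ", first));   // 1, 2, 3, 4, 5, 6
            }
        }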

    Read the article

  • SanjayP’s venture after Microsoft involves no Microsoft

    - by eddraper
    When I was at Microsoft, I always found Sanjay Parthasarathy to be a bright and passionate leader. While he was a bit disconnected at times with what was really going on out in the trenches, I always thought he was a true believer in what we in Developer Platform and Evangelism (DPE) were doing. He got it. He had started DPE and kicked a lot of doors down up in Redmond to make it happen. Back in the early 2000s, battles over platform choices at large customers were trench warfare… bayonets and hand grenades at the P-Code level. This model was not at all suited to Microsoft’s org structure at the time. While there were plenty of people fully able to have competitive conversations around Windows Server, or AD, or Exchange, or the desktop, there weren’t many that could have deep technical conversations around Java vs .NET and the platform “stack” as a cohesive, unified unit of value. This task fell to DPE.

    Sanjay ended up leaving Microsoft a number of months before me in 2009, and I remember thinking these exact words: “holy shit, SanjayP left Microsoft.” When SanjayP left DPE years before that, Sheila Gulati had stepped into his shoes and I thought we were starting to miss a beat. Sheila had built an amazing business at Microsoft India, but I don’t recall being inspired by her as a leader. SanjayP’s talks felt like the opening scene of “Patton” with George C. Scott pacing in front of the American flag. Sheila was a voice on a con-call. When she moved on in 2007, Walid Abu-Hadba was given the reins. Personally, I don’t ever recall even seeing his face. I think I might recall hearing his voice on some con-calls, but for all intents and purposes he was invisible to me. Perhaps this was the beginning of my carelessness around seeking “visibility.”

    Fast forward to Build 2011. First off, we have no PDC – we have Build. Microsoft had made an 11-year investment by this time in building an organization to make its technology relevant to developers. One would think such an org would be in the driver’s seat of such an event, but we see Windows product group people on the podiums. Watching, I could see the messaging unfold… but no story. It was like the old days. Demos and PowerPoints by team members building the tech, and in many cases VPs. The ensuing confusion is almost legendary now. Windows 8 was, and is, a pretty big deal… but who is telling the story – not just features and benefits, but the story around how it all fits together.

    Having been out of Microsoft for two years now, and looking in, I can only conclude that the “DPE of old” has at best been emasculated, and at worst been completely marginalized by internal politics, or perhaps the eternal march of the corporate entropy generator that resides at all large companies. I don’t think this is a good thing for anyone.

    And now, back to Sanjay, who is the father of Microsoft DPE… I noticed that he has moved back to India and is doing start-up work. His current company Indix looks to be doing some interesting things with “big data” and here’s their stack: Nary a trace of anything Microsoft. What could account for this? I wonder…. Better availability of labor and expertise in India for this stack? Donno, but even in India, leet R and Hadoop skills have to be hard to find. Technical superiority? This, I sincerely doubt. This stack, with SanjayP’s name as CEO, leaves me with an unsettling feeling. If he did believe, he no longer does. One doesn’t place bets with real money on things they don’t believe in.
Perhaps he never did believe, and was a corporate creature seeking a niche for himself, from which he manipulated me and others. Or perhaps… anger… be it passive aggression or an outright “in your face F*** you” to his former masters. I guess in the end, only he knows the true reason… but I have my theory...

    Read the article

  • What does (Lua) game scripting mean?

    - by Gerenuk
    I've read that Lua is often used for embedded scripting, and in particular for game scripting. I find it hard to picture how it is used exactly. Can you describe why, for which features, and for which audience it is used? This question isn't specifically about Lua, but rather about any embedded scripting that serves a purpose similar to Lua scripting. Is it used by end users to make custom adjustments? Is it used by game developers to speed up creation of game logic (levels, AI, ...)? Is it used to script game framework code, since scripting can be faster to write? Basically I'm wondering how deep such scripting goes, anywhere between plain configuration and framework logic. And how much scripting is done? A few configuration lines or a considerable amount?
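    (An aside, not part of the original question, but a concrete picture may help.) The usual arrangement is that the engine exposes a small API and the embedded script supplies the game-specific behaviour, anywhere from a few configuration calls to real logic. Below is a minimal sketch; Python stands in for Lua purely to keep the example self-contained, and the names (Engine, spawn_enemy, on_player_seen) are hypothetical.

        # engine.py - a sketch of an engine embedding a scripting layer (Python standing in for Lua)
        class Engine:
            def __init__(self):
                self.spawned = []

            # The only functions the engine chooses to expose to scripts
            def spawn_enemy(self, kind, x, y):
                self.spawned.append((kind, x, y))
                print("spawned %s at (%d, %d)" % (kind, x, y))

        def run_level_script(engine, source):
            # The script sees nothing except what the engine hands it; this boundary
            # is what "embedded scripting" means in practice.
            api = {"spawn_enemy": engine.spawn_enemy}
            exec(source, api)
            return api.get("on_player_seen")  # optional callback the script may define

        LEVEL_SCRIPT = """
        spawn_enemy("goblin", 3, 4)            # configuration-style scripting
        spawn_enemy("archer", 10, 2)

        def on_player_seen(distance):          # logic-style scripting (simple AI)
            return "attack" if distance < 5 else "patrol"
        """

        engine = Engine()
        on_player_seen = run_level_script(engine, LEVEL_SCRIPT)
        print(on_player_seen(3))               # -> attack

    Both ends of the question's spectrum show up here: the spawn_enemy lines are barely more than configuration, while on_player_seen is genuine game logic that a designer or modder could change without recompiling the engine.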

    Read the article

  • What You Said: What’s Powering Your Media Center

    - by Jason Fitzpatrick
    Earlier this week we asked you to share your media center setups, tips, and tricks. Now we’re back to share some of the great comments you left. The range of techniques you all use for getting access to your media is impressive. Some readers had setups as simple as tamasksz’s: WDTV Live with an external HDD… So simple, but works. Others started with simple setups, like a WDTV Live, and worked their way up, like Dave:

    Read the article

  • MVC 2 Presentation – Final Demo

    - by Steve Michelotti
    In my presentation this past weekend at NoVa Code Camp, a member of the audience caught my final demo on video. In this demo, I combine multiple new features from MVC 2 to show how to build apps quickly with MVC 2. These features include: Template Helpers / Editor Templates, Server-side/Client-side Validation, Model Metadata for View Model, HTML Encoding Syntax, Dependency Injection, Abstract Controllers, Custom T4 Templates, and Custom MVC Visual Studio 2010 Code Snippets. The projector screen is a little difficult to see during parts of the video – a video of the direct screencast can be seen here: MVC 2 in 2 Minutes.   Direct link to YouTube video here.

    Read the article

  • Wrong encoding in DataReceivedEventArgs

    - by user2102508
    I start a cmd.exe process, redirect stdin to pass a script to it, and redirect stdout and stderr to read cmd's output. Here is the code of my DataReceivedEventHandler: (o, a) => { if(!String.IsNullOrEmpty(a.Data)) { bw.Write(a.Data.ToUTF8()); bw.Write((byte)'\n'); } } In this code, bw is an instance of BinaryWriter and ToUTF8 is a string extension method that converts a string to a UTF-8 encoded byte array. When I use this code in a separate process it works well; however, when I use it as a shared library inside some other process, a.Data doesn't contain valid localized characters (Russian characters, for example). So how should I convert the characters? How do I get cmd's OEM encoding? And why does the code work well in a separate process but not as a shared library inside some other process?
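    For what it's worth, the symptom described is the classic OEM-codepage mismatch: cmd.exe writes its output in the console's OEM codepage (cp866 on Russian systems), so those bytes come out mangled when decoded with any other encoding. The snippet below only illustrates that mismatch; it is written in Python to keep it self-contained, and the codepage names are an assumption about the asker's system:

        # Bytes produced under the OEM codepage...
        raw = "Привет".encode("cp866")
        print(raw.decode("cp866"))                      # ...round-trip correctly,
        print(raw.decode("cp1251", errors="replace"))   # ...but turn into mojibake under the wrong codepage

    On the .NET side, the usual remedies are to tell the Process which encoding to use when reading (ProcessStartInfo.StandardOutputEncoding on newer framework versions) or to query the OEM codepage explicitly (for example GetOEMCP via P/Invoke); whether either applies here depends on the host process, so treat them as suggestions rather than a definitive answer.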

    Read the article

  • How to Search Just the Site You’re Viewing Using Google Search

    - by The Geek
    Have you ever wanted to search the site you’re viewing, but the built-in search box is either hard to find or doesn’t work very well? Here’s how to add a special keyword bookmark that searches the site you’re viewing using Google’s site: search operator. This technique should work in either Google Chrome or Firefox: in Firefox you’ll want to create a regular bookmark and add the script into the keyword field, and for Google Chrome just follow the steps we’ve provided below.

    Read the article

  • Browsing Your ADF Application Module Pooling Params with WLST

    - by Duncan Mills
    In ADF 11g you can of course use Enterprise Manager (EM) to browse and configure the settings used by ADF Business Components Application Modules, as shown here for one of my sample deployed applications. You can access this screen from the EM homepage by pulling down the Application Deployment menu and then choosing ADF > Configure ADF Business Components. Then select the profile that you are actually using (hint: look in the DataBindings.cpx file to work this out - probably the "Local" version unless you've explicitly changed it).
    So, from this screen you can change the pooling parameters and the world is good. But what if you don't have EM installed? In that case you can use the WebLogic scripting capabilities to view (and update) the MBean properties.
    Explanation
    The pooling parameters and many others are handled through Message Driven Beans that are created for the deployed application in the server. In the case of the ADF BC pooling parameters, this MBean combines the configuration deployed as part of the application with any overrides defined as -D options on the JVM startup for the application server instance.
    Using WLST to Browse the Bean Values
    For our purposes here I'm doing this interactively, although you can also write a script or write Java to achieve the same thing.
    Step 0: Before You Start
    You will need the following:
    - Access to the console on the machine that is running the server
    - The WebLogic admin username and password (I'll use weblogic/password as my example here - yours will be different)
    - The name of the deployed application (in this example FMWdh_application1)
    - The package path to the bc4j.xcfg file (in this example oracle.demo.fmwdh.model.service.common.bc4j.xcfg). This is based on the default path for your model project, so it should be fairly easy to work out.
    - The BC configuration your AM is actually running with (look in the DataBindings.cpx for that; in this example DealHelpServiceDeployed is the profile being used)
    Step 1: Start the WLST console
    To start at the beginning, you need to run the WLST command, but that needs a little setup:
    Change to the wlserver_10.3/server/bin directory under your Fusion Middleware Home, e.g.
      [oracle@mymachine] cd /home/oracle/FMW_R1/wlserver_10.3/server/bin
    Set your environment using the setWLSEnv script, e.g. on Oracle Enterprise Linux:
      [oracle@mymachine bin] source setWLSEnv.sh
    Start the WLST interactive console:
      [oracle@mymachine bin] java weblogic.WLST
      Initializing WebLogic Scripting Tool (WLST) ...
      Welcome to WebLogic Server Administration Scripting Shell
      Type help() for help on available commands
      wls:/offline>
    Step 2: Enter the WLST commands
    Connect to the server:
      wls:> connect('weblogic','password')
    Change to the Custom root; this is where the AMPooling MBeans are registered:
      wls:> custom()
    Change to the bc4j MBean directory:
      wls:> cd ('oracle.bc4j.mbean.config')
    Work out the correct directory for the AM configuration you need. This is the difficult bit, not because it's hard to do, but because the names are long. The structure here is such that every child MBean is displayed at the same level as the parent, so for each deployed application there will be many directories shown. In fact, do an ls() command here and you'll see what I mean. Each application will have one MBean for the app as a whole, and then for each deployed configuration in the .xcfg file you'll see: one for the config entry itself, and then one each for Security, DB Connection and AM Pooling.
    So if you deploy an app with just one configuration you'll see 5 directories; if it has two configurations in the .xcfg you'll see 9, and so on.
    The directory you are looking for will contain those bits of information you gathered in Step 0, specifically the application name, the configuration you are using, and the xcfg name:
    - First of all, narrow your list to just those directories returned from the ls() command that begin oracle.bc4j.mbean.config:name=AMPool. These identify the AM pooling MBeans for all the deployed applications.
    - Now look for the correct application name, e.g. Application=FMWdh_application1
    - The config setting in that sub-list should already be correct and match what you expect, e.g. oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg
    - Finally, look for the correct value for the AppModuleConfigType, e.g. oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed
    Now that you have identified the correct directory name, change to it (keep the name on one line of course - I've had to split it across lines here for clarity):
      wls:> cd ('oracle.bc4j.mbean.config:name=AMPool,
          type=oracle.bc4j.mbean.config.AppModuleConfigType.AMPoolType,
          oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg,
          Application=FMWdh_application1,
          oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed')
    Now you can actually view the parameter values with a simple ls() command:
      wls:> ls()
    And here's the output, in which you can view the realtime values of the various pool settings:
      -rw- AmpoolConnectionstrategyclass oracle.jbo.common.ampool.DefaultConnectionStrategy
      -rw- AmpoolDoampooling true
      -rw- AmpoolDynamicjdbccredentials false
      -rw- AmpoolInitpoolsize 2
      -rw- AmpoolIsuseexclusive true
      -rw- AmpoolMaxavailablesize 40
      -rw- AmpoolMaxinactiveage 600000
      -rw- AmpoolMaxpoolsize 4096
      -rw- AmpoolMinavailablesize 2
      -rw- AmpoolMonitorsleepinterval 600000
      -rw- AmpoolResetnontransactionalstate true
      -rw- AmpoolSessioncookiefactoryclass oracle.jbo.common.ampool.DefaultSessionCookieFactory
      -rw- AmpoolTimetolive 3600000
      -rw- AmpoolWritecookietoclient false
      -r-- ConfigMBean true
      -rw- ConnectionPoolManager oracle.jbo.server.ConnectionPoolManagerImpl
      -rw- Doconnectionpooling false
      -rw- Dofailover false
      -rw- Initpoolsize 0
      -rw- Maxpoolcookieage -1
      -rw- Maxpoolsize 4096
      -rw- Poolmaxavailablesize 25
      -rw- Poolmaxinactiveage 600000
      -rw- Poolminavailablesize 5
      -rw- Poolmonitorsleepinterval 600000
      -rw- Poolrequesttimeout 30000
      -rw- Pooltimetolive -1
      -r-- ReadOnly false
      -rw- Recyclethreshold 10
      -r-- RestartNeeded false
      -r-- SystemMBean false
      -r-- eventProvider true
      -r-- eventTypes java.lang.String[jmx.attribute.change]
      -r-- objectName oracle.bc4j.mbean.config:name=AMPool,type=oracle.bc4j.mbean.config.AppModuleConfigType.AMPoolType,oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg,Application=FMWdh_application1,oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed
      -rw- poolClassName oracle.jbo.common.ampool.ApplicationPoolImpl
    Thanks to Brian Fry on the JDeveloper PM Team, who did most of the work to put this sequence of steps together with me badgering him over his shoulder.
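    If you'd rather not type those commands interactively, the same sequence also works as a single WLST script. The sketch below assumes the example values used throughout this post (weblogic/password, the default admin URL t3://localhost:7001, FMWdh_application1, the xcfg package path, and the DealHelpServiceDeployed profile); substitute your own, save it as something like ampool_browse.py, and run it with java weblogic.WLST ampool_browse.py.

        # ampool_browse.py - dump the ADF BC Application Module pooling MBean values (sketch)
        connect('weblogic', 'password', 't3://localhost:7001')   # admin URL here is an assumption
        custom()
        cd('oracle.bc4j.mbean.config')

        # The long MBean directory name, assembled from the pieces gathered in Step 0
        amPoolBean = ('oracle.bc4j.mbean.config:name=AMPool,'
                      'type=oracle.bc4j.mbean.config.AppModuleConfigType.AMPoolType,'
                      'oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg,'
                      'Application=FMWdh_application1,'
                      'oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed')
        cd(amPoolBean)

        ls()            # prints the same pool settings shown above
        disconnect()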

    Read the article

< Previous Page | 733 734 735 736 737 738 739 740 741 742 743 744  | Next Page >