Search Results

Search found 18702 results on 749 pages for 'digital input'.


  • VirtualBox 3.2 is released! A Red Letter Day?

    - by Fat Bloke
    Big news today! A new release of VirtualBox, packed full of innovation and improvements. Over the next few weeks we'll take a closer look at some of these new features in a lot more depth, but today we'll whet your appetite with the headline descriptions. To start with, we should point out that this is the first Oracle-branded version, which makes today a real red-letter day ;-)

    Oracle VM VirtualBox 3.2

    Version 3.2 moves VirtualBox forward in three main areas (handily, all beginning with "P"): performance, power and supported guest operating system platforms. Let's take a look:

    Performance

    New: Latest Intel hardware support - Harnessing the latest in chip-level support for virtualization, VirtualBox 3.2 supports the new Intel Core i5 and i7 processors and the Intel Xeon processor 5600 Series, with support for Unrestricted Guest Execution bringing faster boot times for everything from Windows to Solaris guests.
    New: Large Page support - Reducing the size and overhead of key system resources, Large Page support delivers increased performance by enabling faster lookups and shorter table creation times.
    New: In-hypervisor networking - Significant optimization of the networking subsystem has reduced context switching between guests and host, increasing network throughput by up to 25%.
    New: Storage I/O subsystem - VirtualBox 3.2 offers a completely re-worked virtual disk subsystem which uses asynchronous I/O to achieve high performance whilst maintaining high data integrity.
    New: Remote video acceleration - The unique built-in VirtualBox Remote Display Protocol (VRDP), which is primarily used in virtual desktop infrastructure deployments, has been enhanced to deliver video acceleration. This delivers a rich user experience coupled with reduced computational expense, which is vital when servers are running hundreds of virtual machines.

    Power

    New: Page Fusion - Traditional page sharing techniques have suffered from long and expensive cache construction as pages are scrutinized as candidates for de-duplication. Taking a smarter approach, VirtualBox Page Fusion uses intelligence in the guest virtual machine to determine much more rapidly and accurately those pages which can be eliminated, thereby increasing the capacity or VM density of the system.
    New: Memory ballooning - Ballooning provides another method to increase VM density by allowing the memory of one guest to be recouped and made available to others.
    New: Multiple virtual monitors - VirtualBox 3.2 now supports multi-headed virtual machines with up to 8 virtual monitors attached to a guest. Each virtual monitor can be a host window, or be mapped to the host's physical monitors.
    New: Hot-plug CPUs - Modern operating systems such as Windows Server 2008 x64 Datacenter Edition or the latest Linux server platforms allow CPUs to be dynamically inserted into a system to provide incremental computing power while the system is running. Version 3.2 introduces support for hot-plug vCPUs, allowing VirtualBox virtual machines to be given more power, with zero downtime of the guest.
    New: Virtual SAS controller - VirtualBox 3.2 now offers a virtual SAS controller, enabling it to run the most demanding of high-end guests.
    New: Online snapshot merging - Snapshots are powerful but can eat up disk space and need to be pruned from time to time. Historically, machines have needed to be turned off to delete or merge snapshots, but with VirtualBox 3.2 this operation can be done whilst the machines are running. This allows sophisticated system management with minimal interruption of operations.
    New: OVF enhancements - VirtualBox has supported the OVF standard for virtual machine portability for some time. Now with 3.2, VirtualBox-specific configuration data is also stored in the standard, allowing richer virtual machine definitions without compromising portability.
    New: Guest Automation - The Guest Automation APIs allow host-based logic to drive operations in the guest.

    Platforms

    New: USB keyboard and mouse - Support for more guests that require USB input devices.
    New: Oracle Enterprise Linux 5.5 - Support for the latest version of Oracle's flagship Linux platform.
    New: Ubuntu 10.04 ("Lucid Lynx") - Support for both the desktop and server versions of the popular Ubuntu Linux distribution.

    And as a man once said, "just one more thing"...

    New: Mac OS X (experimental) - On Apple hardware only, support for creating virtual machines running Mac OS X.

    All in all this is a pretty powerful release packed full of innovation and speedups. So what are you waiting for?

    -FB

  • Node.js Lockstep Multiplayer Architecture

    - by Wakaka
    Background

    I'm using the lockstep model for a multiplayer Node.js/Socket.IO game in a client-server architecture. User input (mouse or keypress) is parsed into commands like 'attack' and 'move' on the client, which are sent to the server and scheduled to be executed on a certain tick. This is in contrast to sending state data to clients, which I don't wish to use due to bandwidth issues. Each tick, the server will send the list of commands for that tick (possibly empty) to each client. The server and all clients will then process the commands and simulate that tick in exactly the same way. With Node.js this is actually quite simple due to the possibility of code sharing between server and client. I'll just put the deterministic simulator in the /shared folder, which can be run by both server and client. The server simulation is required so that there is an authoritative version of the simulation which clients cannot alter.

    Problem

    Now, the game has many entity classes, like Unit, Item, Tree etc. Entities are created in the simulator. However, each class has some methods that are shared and some that are client-specific. For instance, the Unit class has an addHp method which is shared. It also has methods like getSprite (gets the image of the entity), isVisible (checks if the unit can be seen by the client), onDeathInClient (does a bunch of stuff when it dies only on the client, like adding announcements) and isMyUnit (a quick function to check if the client owns the unit). Up till now, I have been piling all the client functions into the shared Unit class, and adding a this.game.isServer() check when necessary. For instance, when the unit dies, it will call if (!this.game.isServer()) { this.onDeathInClient(); }. This approach has worked fine so far, in terms of functionality. But as the codebase grew bigger, this style of coding began to seem a little strange. Firstly, the client code is clearly not shared, and yet is placed under the /shared folder. Secondly, client-specific variables for each entity are also instantiated on the server entity (like unit.sprite), which can run into problems when the server cannot instantiate the variable (it doesn't have an Image class like the one in browsers). So my question is: is there a better way to organize the client code, or is this a common way of doing things for lockstep multiplayer games? I can think of a possible workaround, but it has its own problems.

    Possible workaround (with problems)

    I could use JavaScript mixins that are only added when in a browser. Thus, at the end of the /shared/unit.js file, I would have this code:

        if (typeof exports !== 'undefined') module.exports = Unit;
        else mixin(Unit, LocalUnit);

    Then I would have /client/localunit.js store an object LocalUnit of client-side methods for Unit. Now, I already have a publish-subscribe system in place for events in the simulator. To remove the this.game.isServer() checks, I could publish entity-specific events whenever I want the client to do something. For instance, I would do this.publish('Death') in /shared/unit.js and this.subscribe('Death', this.onDeathInClient) in /client/localunit.js. But this would make the simulator's event listener lists different on the server and the client. Now if I want to clear all subscribed events only from the shared simulator, I can't. Of course, it is possible to create two event subscription systems - one client-specific and one shared - but now the publish() method would have to do if (!this.game.isServer()) { this.publishOnClient(event); }. All in all, the workaround off the top of my head seems pretty complicated for something as simple as separating the client and shared code. Thus, I wonder if there is an established and simpler method for better code organization, hopefully specific to Node.js games.
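
    To make the mixin workaround concrete, here is a minimal sketch; the mixin helper is hypothetical (the question only names it), and the method bodies are placeholders:

        // client/localunit.js - client-only presentation methods.
        // In the browser this script is loaded *before* shared/unit.js.
        var LocalUnit = {
            getSprite: function () { return this.sprite; },
            onDeathInClient: function () { /* announcements, client-side effects... */ }
        };

        // shared/unit.js - deterministic logic, identical on server and client.
        function Unit(game) {
            this.game = game;
            this.hp = 100;
        }
        Unit.prototype.addHp = function (amount) {
            this.hp += amount;
        };

        // Hypothetical mixin helper: grafts client-only methods onto the prototype.
        function mixin(targetClass, methods) {
            for (var name in methods) {
                if (methods.hasOwnProperty(name)) {
                    targetClass.prototype[name] = methods[name];
                }
            }
        }

        if (typeof exports !== 'undefined') {
            module.exports = Unit;      // Node.js: export the shared class untouched
        } else {
            mixin(Unit, LocalUnit);     // browser: attach the client-side methods
        }

    With this split, the server never sees LocalUnit, so client-only state like unit.sprite stays out of /shared entirely.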

  • How can I fix my keyboard layout?

    - by Scott Severance
    For a long time, I've had my keyboard configured to use the layout currently known as "English (international AltGr dead keys)." I like this layout because without any modifier keys, it's identical to the US English keyboard, but when I hold Right Alt I can get accented letters and other characters not available on a standard US English keyboard. In Oneiric, however, the layout is messed up. Right Alt+N produces "ñ" as expected. And another method works: Right Alt+`, E produces "è", also as expected. But there's no way to type "é", which is probably the accented letter I type the most. I expect Right Alt+A, E to do the trick. But instead of a dead key for the acute accent, it uses a method for combining characters to create the hybrid "´e". This hybrid looks like the proper "é" in some settings, but it isn't the same character and doesn't always work. (For example, in the text input box as I type this, it looks the same as the proper character, but when displayed on the site for all to see, it looks very wrong--at least on my machine.) Ditto for all other characters with an acute accent, though some are available directly as pre-composed characters: for example, Right Alt+I yields "í". How can I change the acute accent on the A key to a proper dead key? Perhaps the more general version of this is: how can I tweak my keyboard layout?

    Update

    I just tested this on my other machine, also running Oneiric, but upgraded from previous versions. I have no problems with the second machine. The problem machine was a fresh install of Oneiric, but I kept my old $HOME when I did the fresh install.

    Clarification

    Even if an answer doesn't address my specific examples, I would still accept it if it provided enough detail for me to find the layout and tweak it according to my needs.

    Major Update

    After working through the information gained from Jim C's and Chascon's helpful replies, I've learned something new: the problem isn't with the layout itself, but with the fact that the selected layout isn't being applied. When I look at the definition in /usr/share/X11/xkb/symbols/us of the layout I've been running for a long time, I find that the definition doesn't match what I get when I type. In addition, the keyboard layout dialog that's supposed to show the current layout looks different from the way the layout is defined in the file I mentioned, and matches what actually happens when I type. Following Jim C's suggestion, I created a new layout in /usr/share/X11/xkb/symbols/us containing some modifications to the layout I want. I can select my layout from the keyboard properties, and I can use it on the console following Chascon's post, but the layout I get when typing is unchanged. Apparently, there's a different layout defined somewhere that's overriding what I've set. Where is that layout hiding? This problem occurs in Unity (3D and 2D), but I was able to get the correct layout set in Xfce. In case it's relevant: this problem has occurred since I installed Oneiric fresh on this machine (though I preserved my $HOME), and I don't recall whether it occurred before the reinstall. Also, in case it's relevant, I run iBus so I can type Korean. I have a few difficulties with iBus, but I doubt they're related.

  • database design help for game / user levels / progress

    - by sprugman
    Sorry this got long and all prose-y. I'm creating my first truly gamified web app and could use some help thinking about how to structure the data.

    The Set-up

    Users need to accomplish tasks in each of several categories before they can move up a level. I've got my Users, Tasks, and Categories tables, and a UserTasks table which joins the three. ("User 3 has added Task 42 in Category 8. Now they've completed it.") That's all fine and working wonderfully.

    The Challenge

    I'm not sure of the best way to track the progress in the individual categories toward each level. The "business" rules are:

    You have to achieve a certain number of points in each category to move up.
    If you get the number of points needed in Cat 8, but still have other work to do to complete the level, any new Cat 8 points count toward your overall score, but don't "roll over" into the next level.
    The number of categories is small (five currently) and unlikely to change often, but by no means absolutely fixed.
    The number of points needed to level up will vary per level, probably by a formula, or perhaps a lookup table.

    So the challenge is to track each user's progress toward the next level in each category. I've thought of a few potential approaches:

    Possible Solutions

    1. Add a column to the Users table for each category and reset them all to zero each time a user levels up.
    2. Have a separate UserProgress table with a row for each category for each user and the number of points they have. (Basically a many-to-many version of #1.)
    3. Add a userLevel column to the UserTasks table and use that to derive their progress with some kind of SUM statement. Their current level will be a simple int in the User table.

    Pros & Cons

    (1) seems like by far the most straightforward, but it's also the least flexible. Perhaps I could use a naming convention based on the category ids to help overcome some of that. (With code like "select cats; for each cat, get the value from Users.progress_{cat.id}.") It's also the one where I lose the most data -- I won't know which points counted toward leveling up. I don't have a need in mind for that, so maybe I don't care. (2) seems complicated: every time I add or subtract a user or a category, I have to maintain the other table. I foresee synchronization challenges. (3) is somewhere in between -- cleaner than #2, but less intuitive than #1. In order to find out where a user is, I'd have mildly complex SQL like:

        SELECT categoryId, SUM(points)
        FROM UserTasks
        WHERE userId = {user.id} AND countsTowardLevel = {user.level}
        GROUP BY categoryId

    Hmm... that doesn't seem so bad. I think I'm talking myself into #3 here, but would love any input, advice or other ideas.

    P.S. Sorry for the cross-post. I wrote this up on SO and then remembered that there was a game dev-focused one. Curious to see if I get different answers one place than the other....

  • Using T4 to generate Configuration classes

    - by Justin Hoffman
    I wanted to try to use T4 to read a web.config and generate all of the appSettings and connectionStrings as properties of a class. I elected in this template only to output appSettings and connectionStrings, but you can see it would be easily adapted for app-specific settings, bindings, etc. This allows for quick access to config values as well as removing the potential for typos when accessing values from the ConfigurationManager. One caveat: a developer would need to remember to run the .tt file after adding an entry to the web.config. However, one would quickly notice when trying to access the property from the generated class (it wouldn't be there). Additionally, there are other options as noted here. The first step was to create the .tt file. Note that this is a basic example; it could be extended even further, I'm sure. In this example I just manually input the path to the web.config file.

        <#@ template debug="false" hostspecific="true" language="C#" #>
        <#@ output extension=".cs" #>
        <#@ assembly name="System.Configuration" #>
        <#@ assembly name="System.Xml" #>
        <#@ assembly name="System.Xml.Linq" #>
        <#@ assembly name="System.Net" #>
        <#@ assembly name="System" #>
        <#@ import namespace="System.Configuration" #>
        <#@ import namespace="System.Xml" #>
        <#@ import namespace="System.Net" #>
        <#@ import namespace="Microsoft.VisualStudio.TextTemplating" #>
        <#@ import namespace="System.Xml.Linq" #>
        using System;
        using System.Configuration;
        using System.Xml;
        using System.Xml.Linq;
        using System.Linq;

        namespace MyProject.Web
        {
            public partial class Configurator
            {
        <#
            var xDocument = XDocument.Load(@"G:\MySolution\MyProject\Web.config");
            var results = xDocument.Descendants("appSettings");
            const string key = "key";
            const string name = "name";
            foreach (var xElement in results.Descendants())
            {
        #>
                public string <#= xElement.Attribute(key).Value #> { get { return ConfigurationManager.AppSettings[<#= string.Format("{0}{1}{2}", "\"", xElement.Attribute(key).Value, "\"") #>]; } }
        <#  } #>
        <#
            var connectionStrings = xDocument.Descendants("connectionStrings");
            foreach (var connString in connectionStrings.Descendants())
            {
        #>
                public string <#= connString.Attribute(name).Value #> { get { return ConfigurationManager.ConnectionStrings[<#= string.Format("{0}{1}{2}", "\"", connString.Attribute(name).Value, "\"") #>].ConnectionString; } }
        <#  } #>
            }
        }

    The resulting .cs file:

        using System;
        using System.Configuration;
        using System.Xml;
        using System.Xml.Linq;
        using System.Linq;

        namespace MyProject.Web
        {
            public partial class Configurator
            {
                public string ClientValidationEnabled { get { return ConfigurationManager.AppSettings["ClientValidationEnabled"]; } }
                public string UnobtrusiveJavaScriptEnabled { get { return ConfigurationManager.AppSettings["UnobtrusiveJavaScriptEnabled"]; } }
                public string ServiceUri { get { return ConfigurationManager.AppSettings["ServiceUri"]; } }
                public string TestConnection { get { return ConfigurationManager.ConnectionStrings["TestConnection"].ConnectionString; } }
                public string SecondTestConnection { get { return ConfigurationManager.ConnectionStrings["SecondTestConnection"].ConnectionString; } }
            }
        }

    Next, I extended the partial class for easy access to the configuration. However, you could just use the generated class file itself.

        using System;
        using System.Linq;
        using System.Xml.Linq;

        namespace MyProject.Web
        {
            public partial class Configurator
            {
                private static readonly Configurator Instance = new Configurator();
                public static Configurator For { get { return Instance; } }
            }
        }

    Finally, in my example, I used the Configurator class like so:

        [TestMethod]
        public void Test_Web_Config()
        {
            var result = Configurator.For.ServiceUri;
            Assert.AreEqual(result, "http://localhost:30237/Service1/");
        }

  • SQL SERVER – How to Get SQL Server Restart Notification?

    - by Pinal Dave
    A few days back my friend called to ask if there is any tool which can be used to get a restart notification for SQL Server in their environment. I told him that SQL Server can do it by itself with some configuration. He was happy and surprised to know that he need not spend any extra money. In SQL Server, we can configure stored procedure(s) to run at start-up of SQL Server. This blog gives the steps to achieve it. There are many situations where this feature can be used. Below are a few:

    Logging SQL Server startup timings
    Modifying data in some table during startup (i.e. a table in tempdb)
    Sending notification about SQL Server start

    Step 1 – Enable ‘scan for startup procs’

    This can be done either using T-SQL or the user interface of Management Studio.

        EXEC sys.sp_configure N'Show Advanced Options', N'1'
        GO
        RECONFIGURE WITH OVERRIDE
        GO
        EXEC sys.sp_configure N'scan for startup procs', N'1'
        GO
        RECONFIGURE WITH OVERRIDE
        GO

    Below is the interface to change the setting. We need to go to “Server” > “Properties” and use the “Advanced” tab. “Scan for Startup Procs” is the parameter under the “Miscellaneous” section as shown below. We need to set the value to “True” and hit OK.

    Step 2 – Create stored procedure

    It’s important to note that the procedure is executed after recovery is finished for ALL databases. Here is a sample stored procedure. You can use your own logic in the procedure.

        CREATE PROCEDURE SQLStartupProc
        AS
        BEGIN
            CREATE TABLE ##ThisTableShouldAlwaysExists (AnyColumn INT)
        END

    Step 3 – Set procedure to run at startup

    We need to use sp_procoption to mark the procedure to run at startup. Here is the code to let SQL Server know that this is a startup proc:

        sp_procoption 'SQLStartupProc', 'startup', 'true'

    This can be used only for procedures in the master database. Otherwise we get this error:

        Msg 15398, Level 11, State 1, Procedure sp_procoption, Line 89
        Only objects in the master database owned by dbo can have the startup setting changed.

    We also need to remember that such a procedure should not have any input/output parameters. Here is the error which would be raised:

        Msg 15399, Level 11, State 1, Procedure sp_procoption, Line 107
        Could not change startup option because this option is restricted to objects that have no parameters.

    Verification

    Here is the query to find which procedures are marked as startup procedures:

        SELECT name
        FROM sys.objects
        WHERE OBJECTPROPERTY(OBJECT_ID, 'ExecIsStartup') = 1

    Once this is done, I restarted the SQL instance, and here is what we see in the SQL ERRORLOG:

        Launched startup procedure 'SQLStartupProc'.

    This confirms that the stored procedure is executed. You can also notice that this is done after all databases are recovered:

        Recovery is complete. This is an informational message only. No user action is required.

    After a few days my friend called me again and asked – what if I want to turn this OFF? Use the comments section and post the answer for him.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL

  • apt-get upgrade stuck at the same package

    - by decibyte
    Current status

    I've started to suspect this is not an Ubuntu issue, but related to the internet connection here at my work. Until I'm sure, I'm leaving my question below.

    Original question

    I'm stuck, can't upgrade my system. Running sudo apt-get upgrade gives me the following:

        mmm@alalunga:~$ sudo apt-get upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages have been kept back:
          ginn libgrip0 linux-generic-pae linux-headers-generic-pae linux-image-generic-pae
        The following packages will be upgraded:
          apport apport-gtk bind9-host build-essential dhcp3-client dhcp3-common dnsutils eog evince evince-common
          firefox firefox-branding firefox-dbg firefox-globalmenu firefox-gnome-support firefox-locale-en gimp
          gimp-data gir1.2-totem-1.0 glib-networking glib-networking-common glib-networking-services gnupg gpgv
          icedtea-6-jre-cacao icedtea-6-jre-jamvm icedtea-6-plugin icedtea-netx icedtea-netx-common icedtea-plugin
          isc-dhcp-client isc-dhcp-common libapache2-mod-php5 libart-2.0-2 libbind9-80 libdns81 libevince3-3
          libgimp2.0 libisc83 libisccc80 libisccfg82 liblwres80 libssl-dev libssl-doc libssl1.0.0 libtotem0
          linux-firmware linux-libc-dev openjdk-6-jre openjdk-6-jre-headless openjdk-6-jre-lib openssl php-pear
          php5-cli php5-common php5-curl php5-dev php5-gd php5-mysql php5-xsl policykit-1-gnome python-apport
          python-django python-gst0.10 python-problem-report resolvconf thunderbird thunderbird-globalmenu
          thunderbird-gnome-support totem totem-common totem-mozilla totem-plugins xserver-xorg-input-synaptics
        74 upgraded, 0 newly installed, 0 to remove and 5 not upgraded.
        Need to get 317 MB/327 MB of archives.
        After this operation, 1.481 kB of additional disk space will be used.
        Do you want to continue [Y/n]?
        Get:1 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
        Get:2 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
        Get:3 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
        Get:4 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
        Get:5 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
        Get:6 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
        Get:7 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
        9% [7 openjdk-6-jre-headless 27,3 MB/27,3 MB 100%]

    It keeps downloading the openjdk-6-jre-headless package, then does nothing for a while (hanging on the last line above), then downloads the package again. It's at its 13th download attempt at the moment of writing. The actual downloads seem to complete just fine, but whatever apt does after downloading seems to be failing. I tried removing openjdk-6, but then it wanted to install openjdk-7 instead, with the same result: hanging at openjdk-7-jre-headless instead. I also tried changing servers from my local (Danish) mirror to the main server. No luck. It's also keeping me from upgrading all the other packages. What to do?

    Update

    After following the instructions in the answer by @lpanebr, it is now stuck at the linux-firmware package. So maybe it's a more general problem rather than one related to specific package(s)? Although it did download some packages without problems before getting stuck at linux-firmware.

  • Solving File Upload Cancel Issue

    - by Frank Nimphius
    In Oracle JDeveloper 11g R1 (I did not test 11g R2), the file upload component is submitted even if users click a cancel button with immediate="true" set. Usually, immediate="true" on a command button bypasses all model updates, which would make you think that the file upload isn't processed either. However, using a form like the one shown below, pressing the cancel button does not suppress the file upload.

        <af:form id="f1" usesUpload="true">
          <af:inputFile label="Choose file" id="fileup" clientComponent="true"
                        value="#{FileUploadBean.file}"
                        valueChangeListener="#{FileUploadBean.onFileUpload}">
          </af:inputFile>
          <af:commandButton text="Submit" id="cb1" partialSubmit="true"
                            action="#{FileUploadBean.onInputFormSubmit}"/>
          <af:commandButton text="cancel" id="cb2" immediate="true"/>
        </af:form>

    The solution to this problem is a change of the event root, which you can achieve either by i) setting partialSubmit="true" on the command button, or by ii) surrounding the form parts that should not be submitted when the cancel button is pressed with an af:subform tag.

    i) partialSubmit solution

        <af:form id="f1" usesUpload="true">
          <af:inputFile .../>
          <af:commandButton text="Submit" .../>
          <af:commandButton text="cancel" immediate="true" partialSubmit="true" .../>
        </af:form>

    ii) subform solution

        <af:form id="f1" usesUpload="true">
          <af:subform id="sf1">
            <af:inputFile ... />
            <af:commandButton text="Submit" .../>
          </af:subform>
          <af:commandButton text="cancel" immediate="true" .../>
        </af:form>

    Note that the af:subform surrounds the input form parts that you want to submit when the submit button is pressed. By default, the af:subform only submits its contained content if the submit is issued from within it.

  • Oracle Solaris Crash Analysis Tool 5.3 now available

    - by user12609056
    Oracle Solaris Crash Analysis Tool 5.3

    The Oracle Solaris Crash Analysis Tool Team is happy to announce the availability of release 5.3. This release addresses bugs discovered since the release of 5.2, plus enhancements to support Oracle Solaris 11 and updates to Oracle Solaris versions 7 through 10. The packages are available on My Oracle Support - simply search for Patch 13365310 to find the downloadable packages.

    Release Notes

    General

    blast support - The blast GUI has been removed and is no longer supported.
    Oracle Solaris 2.6 support - As of Oracle Solaris Crash Analysis Tool 5.3, support for Oracle Solaris 2.6 has been dropped. If you have systems running Solaris 2.6, you will need to use Oracle Solaris Crash Analysis Tool 5.2 or earlier to read their crash dumps.

    New Commands

    Sanity command - Though one can re-run the sanity checks that are run at tool start-up using the coreinfo command, many users were unaware that they were being run. Though these checks can still be run using that command, a new command, namely sanity, can now be used to re-run the checks at any time.

    Interface Changes

    scat_explore -r and -t options - The -r option has been added to scat_explore so that a base directory can be specified, and the -t option was added to enable color tagging of the output. The scat_explore sub-command now accepts new options. Usage is:

        scat --scat_explore [-atv] [-r base_dir] [-d dest] [unix.N] [vmcore.]N

    Where:

    -v          Verbose mode: the command will print messages highlighting what it's doing.
    -a          Auto mode: the command does not prompt for input from the user as it runs.
    -d dest     Instructs scat_explore to save its output in the directory dest instead of the present working directory.
    -r base_dir Instructs scat_explore to save its output under the directory base_dir instead of the present working directory. If it is not specified using the -d option, scat_explore names its output file "scat_explore_system_name_hostid_lbolt_value_corefile_name."
    -t          Enable color tags. When enabled, scat_explore tags important text with colors that match the level of importance. These colors correspond to the colors normally printed when running Oracle Solaris Crash Analysis Tool in interactive mode.

    Tag Name - Definition
    FATAL   - An extremely important message which should be investigated.
    WARNING - A warning that may or may not have anything to do with the crash.
    ERROR   - An error, usually printed with a suggested command.
    ALERT   - Used to indicate something the tool discovered.
    INFO    - Purely informational message.
    INFO2   - A follow-up to an INFO-tagged message.
    REDZONE - Usually used when printing memory info showing something is in the kernel's REDZONE.

    N is the number of the crash dump. Specifying unix.N vmcore.N is optional and not required.

    Example:

        $ scat --scat_explore -a -v -r /tmp vmcore.0
        #Output directory: /tmp/scat_explore_oomph_833a2959_0x28800_vmcore.0
        #Tar filename: scat_explore_oomph_833a2959_0x28800_vmcore.0.tar
        #Extracting crash data...
        #Gathering standard crash data collections...
        #Panic string indicates a possible hang...
        #Gathering Hang Related data...
        #Creating tar file...
        #Compressing tar file...
        #Successful extraction
        SCAT_EXPLORE_DATA_DIR=/tmp/scat_explore_oomph_833a2959_0x28800_vmcore.0

    Sending scat_explore results

    The .tar.gz file that results from a scat_explore run may be sent using Oracle Secure File Transfer. The Oracle Secure File Transfer User Guide describes how to use it to send a file. The send_scat_explore script now has a -t option for specifying a "to" address for sending the results. This option is mandatory.

    Known Issues

    There are a couple of known issues that we are addressing in release 5.4, which you should expect to see soon:

    Display of timestamps in threads and clock information is incorrect in some cases.
    There are alignment issues with some of the tables produced by the tool.

  • Detect Unicode Usage in SQL Column

    One optimization you can make to a SQL table that is overly large is to change from nvarchar (or nchar) to varchar (or char). Doing so will cut the size used by the data in half, from 2 bytes per character (+ 2 bytes of overhead for varchar) to only 1 byte per character. However, you will lose the ability to store Unicode characters, such as those used by many non-English alphabets. If the tables are storing user input, and your application is or might one day be used internationally, it's likely that using Unicode for your characters is a good thing. However, if instead the data is being generated by your application itself or your development team (such as lookup data), and you can be certain that Unicode character sets are not required, then switching such columns to varchar/char can be an easy improvement to make.

    Avoid Premature Optimization

    If you are working with a lookup table that has a small number of rows, and is only ever referenced in the application by its numeric ID column, then you won't see any benefit to using varchar vs. nvarchar. More generally, for small tables, you won't see any significant benefit. Thus, if you have a general policy in place to use nvarchar/nchar because it offers more flexibility, do not take this post as a recommendation to go against this policy anywhere you can. You really only want to act on measurable evidence that suggests that using Unicode is resulting in a problem, and that you won't lose anything by switching to varchar/char.

    Obviously the main reason to make this change is to reduce the amount of space required by each row. This in turn affects how many rows SQL Server can page through at a time, and can also impact index size and how much disk I/O is required to respond to queries, etc. If for example you have a table with 100 million records in it and this table has a column of type nchar(5), this column will use 5 * 2 = 10 bytes per row, and with 100M rows that works out to 10 bytes * 100 million = 1000 MB, or about 1 GB. If it turns out that this column only ever stores ASCII characters, then changing it to char(5) would reduce this to 5 * 1 = 5 bytes per row, and only 500 MB. Of course, if it turns out that it only ever stores the values true and false, then you could go further and replace it with a bit data type which uses only 1 byte per row (100 MB total).

    Detecting Whether Unicode Is In Use

    So by now you think that you have a problem and that it might be alleviated by switching some columns from nvarchar/nchar to varchar/char, but you're not sure whether you're currently using Unicode in these columns. By definition, you should only be thinking about this for a column that has a lot of rows in it, since the benefits just aren't there for a small table, so you can't just eyeball it and look for any non-ASCII characters. Instead, you need a query. It's actually very simple:

        SELECT DISTINCT(CategoryName)
        FROM Categories
        WHERE CategoryName <> CONVERT(varchar, CategoryName)

    If the column holds only characters that survive the round-trip to varchar, this returns no rows; any rows that do come back contain characters that would be lost by the conversion.

    Summary

    Thanks to Gregg Stark for the tip.

  • New spreadsheet accompanying SmartAssembly 6.0 provides statistics for prioritizing bug fixes

    - by Jason Crease
    One problem developers face is how to prioritize the many voices providing input into software bugs. If there is something wrong with a function that is the darling of a particular user, he or she tends to want action - now! The developer's dilemma is how to ascertain whether the problem is major or minor, and when it should be addressed. Now there is a new spreadsheet accompanying SmartAssembly that provides exactly that information in an objective manner. This might upset those used to getting their way by being the loudest or pushiest, but ultimately it will ensure that the biggest problems get the priority they deserve.

    Here's how it works: Feature Usage Reporting (FUR) in SmartAssembly 6.0 provides a wealth of data about how your software is used by its end-users, but in the SmartAssembly UI the data isn't mined to its full extent. The new Excel spreadsheet for FUR extracts statistics from that data and presents them in easy-to-understand forms. I developed the spreadsheet feature in Microsoft Excel, using a fair amount of VBA. The spreadsheet connects directly to the database which stores the feature-usage data, and shows a wide variety of statistics and tables extracted from that data. You want to know what percentage of users have used the 'Export as XML' button? No problem. How popular is v5.3 compared to v5.1? There are graphs for that. You need to know whether you have more users in Russia or Brazil? There's a big pie chart for that. I recently witnessed the spreadsheet in use here at Red Gate Software.

    My bug is exposed as minor

    While testing new features in .NET Reflector, I found a usability bug in the Refresh button and filed it in the Red Gate bug-tracking system. The bug was labelled "V.NEXT MINOR," which means it would be fixed in the next point release. Although I'm a professional tester, I'm not much different than most software users when they discover a bug that affects them personally: I wanted it fixed immediately. There was an ulterior motive at play here, of course: I would get to see my colleagues put the spreadsheet to work. The Reflector team loaded up the spreadsheet to view the feature-usage statistics that SmartAssembly collected for the Refresh button. The resulting statistics showed that only 8% of users have ever pressed the Refresh button, and only 2.6% of sessions involve pressing the button. When Refresh is used, it's pressed on average 1.6 times a session, with a maximum of 8 times during a session. This was in stark contrast to what I was doing as a conscientious tester: pressing it dozens of times per session. The spreadsheet provided evidence that my bug was a minor one.

    On to more serious things

    Based on the solid evidence uncovered by the spreadsheet, the Reflector team concluded that my experience does not represent that of the vast majority of Reflector's recorded users. The Reflector team had ample data to send me back to my desk and keep the bug classified as "V.NEXT MINOR." The team then went back to fixing more serious bugs. Putting myself in the shoes of the user, I might not be thoroughly happy, but I cannot deny that the evidence clearly placed me in a very small minority. Next time I'm hoping the spreadsheet will prove that my bug is more important.

    Find out more about Feature-Usage Reporting here. The spreadsheet is available for free download here.

  • Efficiently separating Read/Compute/Write steps for concurrent processing of entities in Entity/Component systems

    - by TravisG
    Setup

    I have an entity-component architecture where entities can have a set of attributes (which are pure data with no behavior), and there exist systems that run the entity logic and act on that data. Essentially, in somewhat pseudo-code:

        Entity {
            id;
            map<id_type, Attribute> attributes;
        }

        System {
            update();
            vector<Entity> entities;
        }

    A system that just moves along all entities at a constant rate might be:

        MovementSystem extends System {
            update() {
                for each entity in entities
                    position = entity.attributes["position"];
                    position += vec3(1,1,1);
            }
        }

    Essentially, I'm trying to parallelise update() as efficiently as possible. This can be done by running entire systems in parallel, or by giving each update() of one system a couple of components so different threads can execute the update of the same system, but for a different subset of entities registered with that system.

    Problem

    In reality, these systems sometimes require that entities interact with (read/write data from/to) each other, sometimes within the same system (e.g. an AI system that reads state from other entities surrounding the currently processed entity), but sometimes between different systems that depend on each other (i.e. a movement system that requires data from a system that processes user input). Now, when trying to parallelize the update phases of entity/component systems, the phase in which data (components/attributes) from entities is read and used to compute something, and the phase where the modified data is written back to entities, need to be separated in order to avoid data races. Otherwise the only way to avoid them (short of wrapping everything in critical sections) is to serialize the parts of the update process that depend on other parts. This seems ugly.

    To me it would seem more elegant to be able to (ideally) have all processing running in parallel, where a system may read data from all entities as it wishes, but doesn't write modifications to that data back until some later point. The fact that this is even possible is based on the assumption that modification write-backs are usually very small in complexity and don't cost much performance, whereas computations are very expensive (relatively). So the overhead added by a delayed-write phase might be evened out by more efficient updating of entities (by having threads work a greater percentage of the time instead of waiting).

    A concrete example of this might be a system that updates physics. The system needs to both read and write a lot of data to and from entities. Optimally, there would be a system in place where all available threads update a subset of all entities registered with the physics system. In the case of the physics system this isn't trivially possible because of race conditions. So without a workaround, we would have to find other systems to run in parallel (which don't modify the same data as the physics system); otherwise the remaining threads are waiting and wasting time. However, that has disadvantages:

    Practically, the L3 cache is pretty much always better utilized when updating a large system with multiple threads, as opposed to multiple systems at once which all act on different sets of data.
    Finding and assembling other systems to run in parallel can be extremely time consuming to design well enough to optimize performance.
    Sometimes, it might even not be possible at all, because a system just depends on data that is touched by all other systems.

    Solution?

    In my thinking, a possible solution would be a system where reading/updating and writing of data are separated, so that in one expensive phase, systems only read data and compute what they need to compute, and then in a separate, performance-wise cheap, write phase, attributes of entities that needed to be modified are finally written back to the entities.

    The Question

    How might such a system be implemented to achieve optimal performance, as well as making programmer life easier? What are the implementation details of such a system, and what might have to be changed in the existing EC architecture to accommodate this solution?
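
    As a rough sketch of the deferred-write idea described above (written in JavaScript for brevity; the names WriteBuffer, stage and commit are invented for illustration, not taken from any engine):

        // Systems never mutate entities during the parallel read/compute phase;
        // they only stage (entity, attribute, value) triples here.
        class WriteBuffer {
            constructor() { this.pending = []; }
            stage(entity, attr, value) {
                this.pending.push({ entity, attr, value });
            }
            // Cheap, single-threaded write phase: apply everything at once.
            commit() {
                for (const w of this.pending) {
                    w.entity.attributes[w.attr] = w.value;
                }
                this.pending.length = 0;
            }
        }

        // Read/compute phase for one system: reads freely, stages writes.
        function movementSystemUpdate(entities, writes, dt) {
            for (const e of entities) {
                const p = e.attributes.position;
                const v = e.attributes.velocity;
                writes.stage(e, 'position', { x: p.x + v.x * dt, y: p.y + v.y * dt });
            }
        }

        // Tiny demo world so the sketch runs standalone.
        const world = {
            entities: [
                { attributes: { position: { x: 0, y: 0 }, velocity: { x: 1, y: 2 } } }
            ]
        };

        // Frame loop: run all systems' compute phases (parallelizable, since
        // entity state is effectively immutable during that step), then commit.
        const writes = new WriteBuffer();
        movementSystemUpdate(world.entities, writes, 1 / 60);
        writes.commit();

    In a real engine the compute phase would be fanned out across worker threads and commit() would run once per frame on a single thread; the point is only that no system ever observes another system's half-written state.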

  • Lead, Follow, or Get out of the way

    - by Daniel Moth
    This is one of the sayings (attributed to Thomas Paine) that totally resonated with me from the first time I heard it, which was only 3 years ago during some training course at work: "Lead, Follow, or Get out of the way"

    You'll find many books with this title and you'll find it quoted by politicians and other leaders in various countries at various times... the quote is open to interpretation and works on many levels.

    To set the tone of what this means to me, I'll use a simple micro example: In any given conversation, you are either leading it or following it, at different times/snapshots of the conversation. If you are not willing or able to lead it, and you are not willing or able to follow it, then you should depart. The bad alternative, which this guidance encourages you NOT to do, is to stick around and obstruct progress by not following, not leading, and simply complaining or trying to derail the discussion in no particular direction.

    The same pattern applies to your position/role at work. Either follow your management/leadership team, or try to lead them to what you think is a better place, or change jobs. Don't stick around complaining about the direction things are going while not actively trying to either change things or make peace with it. In the previous paragraph you can replace the words "your management" with "the people reporting to you" and the guidance still holds. Either lead your direct reports to where you think they should go, or follow their lead, or change jobs. Complaining about folks not taking direction while doing nothing is not a maintainable state.

    To me this quote is not about a permanent state; it is not about some people always leading and some always following. It is about a role/hat that anybody can play/wear at any given moment. One minute I am leading you, the next I am following you, and the next we are both following someone else, and so on... When there is disagreement, debate the different directions for as long as it takes for you to be comfortable that you can either follow or lead. If you don't become comfortable with either of those, get out of the way. Something to remember is that it is impossible to learn how to lead well without learning how to follow well (probably deserves its own blog entry)...

    Things go wrong when someone thinks that they must always be leading, or when everybody wants to follow and nobody steps up to lead... Things go wrong when more than one person wants to lead and they don't try to reach agreement on a shared direction, stubbornly sticking to their guns, pulling the rest of the team in multiple directions... Things go wrong when more than one person wants to lead and, after numerous and lengthy discussions, none of them decides to follow or get out of the way... Things go wrong when people don't want to lead, don't want to follow, and insist on sticking around...

    While there are a few ways things can go wrong, as enumerated in the previous paragraph, the most common one in my experience is the last one I mentioned. You'll recognize these folks as the ones that always complain about everything that is wrong with their company/product but do nothing about it. Every time you hear someone giving feedback on how something is wrong or suboptimal, ask them: "So now that you've identified the problem, what do you think the solution is, and what are you doing to drive us to that solution?" The next time things start going wrong, step up and remind everyone: Lead, Follow, or Get out of the way.

    For more perspectives, and for input to help you form your own interpretation, search the web for this phrase to see in what contexts it is being used (bing, google). Finally, regardless of your political views, I hope you can appreciate, if only as an example, this perspective of someone leading by actually getting out of the way.

    Comments about this post by Daniel Moth welcome at the original blog.

  • BizTalk: Instance Subscription: Details

    - by Leonid Ganeline
    It has interesting behavior, and it is not always what we expect. An orchestration can be enlisted with many subscriptions. In other words, it can have several Receive shapes. Usually the first Receive uses the activation subscription, but the other Receives create instance subscriptions. [See "Publish and Subscribe Architecture" in MSDN]

    Here is a sample process. This orchestration has two Receives. It is a typical sequential convoy. [See "BizTalk Server 2004 Convoy Deep Dive" in MSDN by Stephen W. Thomas]. Let's get the experiment started. There are three typical scenarios.

    First scenario: everything is OK

    The activation subscription for the Sample message is created when the SampleProcess orchestration is enlisted. The instance subscription is created only when the SampleProcess orchestration instance is started, and it is removed when the orchestration instance is ended. So far so good: the Sample_2 message was delivered exactly in this time interval and was consumed.

    Second scenario: no consumers

    Three Sample_2 messages were delivered. One was delivered before the SampleProcess was started and before the instance subscription was created. The second message was delivered in the correct time interval. The third one was delivered after the SampleProcess orchestration was ended and the instance subscription was removed.

    Note: the first Sample_2 was not consumed. It was first in the queue, but it was not waiting; it was suspended when it was delivered to the Message Box, because it didn't have any subscribers at that moment.

    The first and the last Sample_2 messages were Suspended (Nonresumable) in the Message Box. For each of these messages we get two (!) service instances associated with the suspended message. One service instance has the ServiceClass of Messaging, and we can see its Error Description. The second service instance has the ServiceClass of RoutingFailureReport, and we can see its Error Description.

    Third scenario: something goes wrong

    Two Sample_2 messages were delivered. Both were delivered in the same interval, while the SampleProcess orchestration was working and the instance subscription existed. The first Sample_2 was consumed. The second Sample_2 has the subscription, but the subscriber, the SampleProcess orchestration, will not consume it. After the SampleProcess orchestration is ended (and only after! I will discuss this in the next article), it is suspended (Nonresumable). This time only one service instance associated with this kind of scenario is suspended. This service instance has the ServiceClass of Orchestration, and we can see its Error Description. In the Message tab we will see the Sample_2 message in the Suspended (Resumable) status.

    Note: this behavior looks ambiguous. We see here that the orchestration consumes the extra message(s) and gets suspended together with those extra messages. These messages are not consumed in the sense of "processed by the orchestration", but they are consumed in the sense of "delivered to the subscriber". The Receive shape in the orchestration has not received these extra messages, but the messages were routed to the orchestration.

    Unified sequential convoy

    Now one more scenario: the unified sequential convoy. That means the activation subscription is for the same message type as the instance subscription. The Sample_2 message is now the Sample message. For simplicity, the SampleProcess orchestration consumes only two Sample messages. Usually the orchestration consumes many messages inside a loop, but now it is only two of them. The first message starts the orchestration; the second message goes inside this orchestration. Then the next pair of messages follows, and so on. But if the input messages follow at shorter intervals, we have a problem: we lose messages in an unpredictable manner.

    Note: maybe the better behavior would be if the orchestration removed the instance subscription after the message is consumed, not at the end of the orchestration. Right now it is a "feature" of the BizTalk subscription mechanism.

  • Texture displays on Android emulator but not on device

    - by Rob
    I have written a simple UI which takes an image (256x256) and maps it to a rectangle. This works perfectly on the emulator; however, on the phone the texture does not show - I see only a white rectangle. This is my code:

        public void onSurfaceCreated(GL10 gl, EGLConfig config) {
            byteBuffer = ByteBuffer.allocateDirect(shape.length * 4);
            byteBuffer.order(ByteOrder.nativeOrder());
            vertexBuffer = byteBuffer.asFloatBuffer();
            vertexBuffer.put(cardshape);
            vertexBuffer.position(0);

            byteBuffer = ByteBuffer.allocateDirect(shape.length * 4);
            byteBuffer.order(ByteOrder.nativeOrder());
            textureBuffer = byteBuffer.asFloatBuffer();
            textureBuffer.put(textureshape);
            textureBuffer.position(0);

            // Set the background color to black ( rgba ).
            gl.glClearColor(0.0f, 0.0f, 0.0f, 0.5f);
            // Enable Smooth Shading, default not really needed.
            gl.glShadeModel(GL10.GL_SMOOTH);
            // Depth buffer setup.
            gl.glClearDepthf(1.0f);
            // Enables depth testing.
            gl.glEnable(GL10.GL_DEPTH_TEST);
            // The type of depth testing to do.
            gl.glDepthFunc(GL10.GL_LEQUAL);
            // Really nice perspective calculations.
            gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST);
            gl.glEnable(GL10.GL_TEXTURE_2D);
            loadGLTexture(gl);
        }

        public void onDrawFrame(GL10 gl) {
            gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            gl.glDisable(GL10.GL_DEPTH_TEST);
            gl.glMatrixMode(GL10.GL_PROJECTION); // Select Projection
            gl.glPushMatrix();                   // Push The Matrix
            gl.glLoadIdentity();                 // Reset The Matrix
            gl.glOrthof(0f, 480f, 0f, 800f, -1f, 1f);
            gl.glMatrixMode(GL10.GL_MODELVIEW);  // Select Modelview Matrix
            gl.glPushMatrix();                   // Push The Matrix
            gl.glLoadIdentity();                 // Reset The Matrix
            gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
            gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
            gl.glLoadIdentity();
            gl.glTranslatef(card.x, card.y, 0.0f);
            gl.glBindTexture(GL10.GL_TEXTURE_2D, texture[0]); // activates texture to be used now
            gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertexBuffer);
            gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
            gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
            gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
            gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
        }

        public void onSurfaceChanged(GL10 gl, int width, int height) {
            // Sets the current view port to the new size.
            gl.glViewport(0, 0, width, height);
            // Select the projection matrix
            gl.glMatrixMode(GL10.GL_PROJECTION);
            // Reset the projection matrix
            gl.glLoadIdentity();
            // Calculate the aspect ratio of the window
            GLU.gluPerspective(gl, 45.0f, (float) width / (float) height, 0.1f, 100.0f);
            // Select the modelview matrix
            gl.glMatrixMode(GL10.GL_MODELVIEW);
            // Reset the modelview matrix
            gl.glLoadIdentity();
        }

        public int[] texture = new int[1];

        public void loadGLTexture(GL10 gl) {
            // loading texture
            Bitmap bitmap;
            bitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.image);
            // generate one texture pointer
            gl.glGenTextures(0, texture, 0); // adds texture id to texture array
            // ...and bind it to our array
            gl.glBindTexture(GL10.GL_TEXTURE_2D, texture[0]); // activates texture to be used now
            // create nearest filtered texture
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
            // Use Android GLUtils to specify a two-dimensional texture image from our bitmap
            GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
            // Clean up
            bitmap.recycle();
        }

    As per many other similar issues and resolutions on the web, I have tried setting minSdkVersion to 3, loading the bitmap via an input stream (bitmap = BitmapFactory.decodeStream(is)), setting BitmapFactory.Options.inScaled to false, putting the images in the nodpi folder, and putting them in the raw folder - all of which didn't help. I'm not really sure what else to try...

  • AI to move custom-shaped spaceships (shape affecting movement behaviour)

    - by kaoD
    I'm designing a networked, turn-based, 3D-6DOF space fleet combat strategy game which relies heavily on ship customization. Let me explain the game a bit, since you need to know a bit about it to set up the question. What I aim for is the ability to create your own fleet of ships with custom shapes and attached modules (propellers, tractor beams...) which would give advantages and disadvantages to each ship, so you have lots of different fleet distributions. E.g., a long ship with two propellers at the sides would let the ship spin around that plane easily, while bigger ships would move slowly unless you place lots of propellers at the back (therefore spending more "construction" points and energy when moving), and would only move fast in that direction. I plan to balance the whole game around this feature.

    The game would revolve around two phases: an orders phase and a combat phase. During the orders phase, you command the different ships. When all players finish the orders phase, the combat phase begins and the ship orders get resolved in real time for some time; then the action pauses and there's a new orders phase.

    The problem comes when I think about player input. To move a ship, you need to turn different propellers on or off if you want to steer, travel forward, brake, rotate in place... These propellers don't have to run at full power, so you can achieve more movement combinations with fewer propellers. I think this approach is a bit boring, though: the player doesn't want to fiddle with motors or anything, you just want to MOVE and KILL. The way I intend the player to give orders to these ships is by a destination and a rotation; the AI would then calculate the correct propeller power to achieve that movement and rotation. Propulsion doesn't have to stay the same throughout the entire turn calculation (after the orders have been given), so it would be cool if the ships reacted as they move, adjusting the power of the propellers to their needs dynamically, but that may be too hard to implement and it's not really needed for the game to work. In both cases, how would that AI decide which propellers to activate for the best (or at least not the worst) trajectory to be achieved? I thought about some approaches:

    - Learning AI: the ship types would learn about their movement by trial and error, adjusting their behaviour with more use, and finally becoming "smart". I don't want to get involved THAT far in AI coding, and I think it could be frustrating for the player (even if you can let it learn without playing).
    - Pre-calculated timestep movement: upon ship creation, ALL possible movements are calculated for each propeller configuration and power for a given delta-time. Memory intensive, ugly, bad.
    - Pre-calculated trajectories: the same as above, but not for each delta-time but for the whole trajectory, which would then be fitted as closely as possible. Requires a fixed propeller configuration for the whole combat phase, and is still memory intensive, ugly and bad.
    - Continuous brute forcing: the AI continuously checks ALL possible propeller configurations throughout the entire combat phase, pre-calculates a few time steps and decides which one is best based on that. Cons: what's good now might not be that good later, and it's too CPU intensive, ugly, and bad too.
    - Single brute forcing: the same as above, but only brute forcing at the beginning of the simulation, so it needs a constant propeller configuration throughout the entire combat phase.
    - Continuous angle check: this is not a full movement method, but maybe a way to discard "stupid" propeller configurations. Given the propeller's current normal vector and the target one, you can approximate the power the propeller needs based on the angle between them. You must do this continuously throughout the whole combat phase. I figured this one out recently, so I haven't put much thought into it; a priori it has the "what's good now might not be that good later" drawback too, and it doesn't care about the other propellers, which may act together to produce a better propelling configuration.

    I'm really stuck here. Any ideas?
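    One standard way to frame the "which propellers, at what power" problem is control allocation: at any instant each propeller contributes a fixed force and torque per unit of power, so the ship's net force/torque is linear in the vector of propeller powers, and the AI can solve a small constrained least-squares problem each tick to match the commanded motion. The sketch below (TypeScript) illustrates only the idea; the Propeller shape, the projected-gradient solver, and all the numbers are assumptions invented for this example, not code from any particular engine.

    // Sketch: control allocation for custom thruster layouts. All types,
    // numbers and the solver below are invented for illustration.
    type Vec3 = [number, number, number];

    interface Propeller {
      position: Vec3;   // offset from the ship's centre of mass
      direction: Vec3;  // unit thrust direction in ship space
      maxThrust: number;
    }

    function cross(a: Vec3, b: Vec3): Vec3 {
      return [a[1] * b[2] - a[2] * b[1],
              a[2] * b[0] - a[0] * b[2],
              a[0] * b[1] - a[1] * b[0]];
    }

    // Effect of a propeller at full power: 6 numbers (force xyz, torque xyz).
    function effect(p: Propeller): number[] {
      const f: Vec3 = [p.direction[0] * p.maxThrust,
                       p.direction[1] * p.maxThrust,
                       p.direction[2] * p.maxThrust];
      const t = cross(p.position, f);
      return [f[0], f[1], f[2], t[0], t[1], t[2]];
    }

    // Projected gradient descent on ||A u - wanted||^2 with each power u_i
    // clamped to [0, 1]; column i of A is effect(props[i]).
    function allocate(props: Propeller[], wanted: number[], iters = 300, step = 1e-3): number[] {
      const cols = props.map(effect);
      const u = props.map(() => 0.5);            // start at half power
      for (let it = 0; it < iters; it++) {
        const r = wanted.map((w, k) =>           // residual r = A u - wanted
          cols.reduce((acc, col, i) => acc + col[k] * u[i], 0) - w);
        for (let i = 0; i < u.length; i++) {
          const g = 2 * cols[i].reduce((acc, c, k) => acc + c * r[k], 0);
          u[i] = Math.min(1, Math.max(0, u[i] - step * g));
        }
      }
      return u;
    }

    // Example: two rear thrusters pushing forward (+x); asking for pure
    // forward force and zero torque brings both back at ~0.6 power.
    const ship: Propeller[] = [
      { position: [-1,  0.5, 0], direction: [1, 0, 0], maxThrust: 10 },
      { position: [-1, -0.5, 0], direction: [1, 0, 0], maxThrust: 10 },
    ];
    console.log(allocate(ship, [12, 0, 0, 0, 0, 0]));

    Because the allocation is re-solved every tick against the current desired force/torque, this also gives the "ships adjust their propellers as they move" behaviour essentially for free.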

    Read the article

  • Pro/con of using Angular directives for complex form validation/ GUI manipulation

    - by tengen
    I am building a new SPA front end to replace an existing enterprise's legacy hodgepodge of systems that are outdated and in need of updating. I am new to Angular, and wanted to see if the community could give me some perspective. I'll state my problem, and then ask my question. I have to generate several series of checkboxes based on data from a .js include, with data like this:

    $scope.fieldMappings.investmentObjectiveMap = [
        {'id':"CAPITAL PRESERVATION", 'name':"Capital Preservation"},
        {'id':"STABLE", 'name':"Moderate"},
        {'id':"BALANCED", 'name':"Moderate Growth"},
        // etc
        {'id':"NONE", 'name':"None"}
    ];

    The checkboxes are created using an ng-repeat, like this:

    <div ng-repeat="investmentObjective in fieldMappings.investmentObjectiveMap"> ... </div>

    However, I needed the values represented by the checkboxes to map to a different model (not just two-way-bound to the fieldMappings object). To accomplish this, I created a directive which accepts a destination array, destarray, which is eventually mapped to the model. I also know I need to handle some very specific GUI behaviour, such as unchecking "None" if anything else gets checked, or checking "None" if everything else gets unchecked. Also, "None" won't be an option in every group of checkboxes, so the directive needs to be generic enough to accept a validation function that can fiddle with the checked state of the checkbox group's inputs based on what's already clicked, but smart enough not to break if there is no option called "NONE". I started to do that by adding an ng-click which invoked a function in the controller, but in looking around Stack Overflow I read people saying that it's bad to put DOM manipulation code inside your controller - it should go in directives. So do I need another directive? So far (HTML):

    <input my-checkbox-group type="checkbox" fieldobj="investmentObjective"
           ng-click="validationfunc()" validationfunc="clearOnNone()"
           destarray="investor.investmentObjective" />

    Directive code:

    .directive("myCheckboxGroup", function () {
        return {
            restrict: "A",
            scope: {
                destarray: "=",      // the model array the checked ids get written to
                fieldobj: "=",       // the mapping object behind this checkbox
                validationfunc: "&"  // the function to be called for validation (optional)
            },
            link: function (scope, elem, attrs) {
                if (scope.destarray.indexOf(scope.fieldobj.id) !== -1) {
                    elem[0].checked = true;
                }
                elem.bind('click', function () {
                    var index = scope.destarray.indexOf(scope.fieldobj.id);
                    if (elem[0].checked) {
                        if (index === -1) {
                            scope.destarray.push(scope.fieldobj.id);
                        }
                    } else {
                        if (index !== -1) {
                            scope.destarray.splice(index, 1);
                        }
                    }
                });
            }
        };
    })

    .js controller snippet:

    .controller('SuitabilityCtrl', ['$scope', function ($scope) {
        $scope.clearOnNone = function () {
            // naughty jQuery DOM manipulation code that
            // looks at checkboxes and checks/unchecks as needed
        };
    }]);

    The above code is done and works fine, except for the naughty jQuery code in clearOnNone(), which is why I wrote this question. And here is my question: after ALL this, I think to myself - I could be done already if I had just handled all this GUI logic and validation junk with jQuery written in my controller. At what point does it become foolish to write these complicated directives that future developers will have to puzzle over, more than if I had just written jQuery code that 99% of us would understand at a glance? How do other developers draw the line? I see this all over Stack Overflow. For example, this question seems like it could be answered with a dozen lines of straightforward jQuery, yet the asker has opted to do it the Angular way, with a directive and a partial... it seems like a lot of work for a simple problem. Specifically, I suppose I would like to know: how SHOULD I be writing the code that checks whether "None" has been selected (if it exists as an option in this group of checkboxes) and then checks/unchecks the other boxes accordingly? A more complex directive? I can't believe I'm the only developer who has to implement code more complex than needed just to satisfy an opinionated framework.
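    For what it's worth, one way to keep the "None" rule out of jQuery and the DOM entirely is to express it as a pure function over the model array and let Angular's data binding re-render the checkboxes. The sketch below is a framework-agnostic illustration of that idea; NONE_ID, the function name, and the wiring comment at the end are assumptions for the example, not part of the question's code.

    // Pure, DOM-free sketch of the "None" exclusivity rule. NONE_ID and all
    // names here are illustrative stand-ins, not from the question's code.
    const NONE_ID = "NONE";

    // Returns the new selection after `id` is toggled in a group that may or
    // may not contain a "None" option. Easy to unit-test in isolation.
    function toggleWithNoneRule(selected: string[], id: string, groupHasNone: boolean): string[] {
      const wasChecked = selected.includes(id);
      let next = wasChecked ? selected.filter(s => s !== id) : [...selected, id];

      if (!wasChecked && id === NONE_ID) {
        next = [NONE_ID];                        // checking "None" clears the rest
      } else if (!wasChecked) {
        next = next.filter(s => s !== NONE_ID);  // checking anything unchecks "None"
      } else if (groupHasNone && next.length === 0) {
        next = [NONE_ID];                        // unchecking the last box re-checks "None"
      }
      return next;
    }

    // Hypothetical wiring inside the directive's click handler: mutate the
    // bound array in place so Angular's bindings see the change.
    // var updated = toggleWithNoneRule(scope.destarray, scope.fieldobj.id, true);
    // scope.destarray.length = 0;
    // updated.forEach(function (v) { scope.destarray.push(v); });

    The directive's click handler would then just replace the contents of destarray with the function's result, and no element ever needs to be checked or unchecked by hand.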

    Read the article

  • User password rejected on login screen but accepted on text console login

    - by MadsirR
    I had to force-shutdown my Ubuntu 12.04 64-bit machine, after which I restarted and tried to log in as a normal user, which was rejected several times. I then logged in as guest, switched to a tty, and logged in to my regular account with my normal password, which succeeded. (So the password is still valid.) How can I regain access via the normal login procedure (welcome screen)? Update: When I tried to log on with my new password, it was again denied. When I deliberately tried to log on with a faulty password, an error message came back saying: Access denied - wrong password. I suppose the first time the password was not rejected, but the procedure was aborted for some reason. Some additional info after trying to find a solution: I am convinced it is a Compiz issue. Why? Before this happened, all sessions came to a grinding halt, regardless of whether I was logged on in a 2D or 3D environment. I found a link saying that I should remove Compiz and proceed in a 2D environment, which initially worked without a glitch, until my system went into a state of total oblivion. Only after that did the above-mentioned troubles appear. In the meantime I have happened to find a thread with reference 17381, describing exactly what I have experienced. For now, I will try to cure this situation (later this week) and revert with the results, hopefully to close this post. In the meantime I cordially thank you all, even if you didn't kill the problem; you gave me the inspiration to look further and find a possible cure. Update 2: After 15 hrs of trial and error I called it quits. (When I decided to tackle this problem, I'd given myself 12 hrs, to avoid massive loss of time.) I decided to re-install Precise, since the "point 1" version has become available. Login is back to normal, as is the graphic environment. Response to mouse input is still appalling, especially when I have a series of screens open as "children" of a "parent" screen. It still completely locks up. I have installed Enlightenment, Gnome Classic, Gnome 3 and Cinnamon, and they all behave in a similar fashion. FOR THOSE WHO NEED A WAY OUT IN SITUATIONS LIKE THIS:
    1. Open a terminal with [Ctrl+Alt+F2].
    2. Type [sudo killall firefox] (or whatever application you wish to terminate).
    3. Key in your password.
    4. Return to your graphical screen with [Ctrl+Alt+F7], and Bob's your uncle. Just re-open Firefox like nothing happened.
    Next time you are stuck: [Ctrl+Alt+F2], up arrow till you reach the command of your desire, [Ctrl+Alt+F7], etcetera. Hope this is of help. My next move will be to upgrade the kernel to 3.4 from the repositories for 12.10. However, since this entails a totally new situation, I will start a new thread on this site to avoid topic pollution. I will keep you posted. Still.

    Read the article

  • When row estimation goes wrong

    - by Dave Ballantyne
    Whilst working at a client site, I hit upon one of those issues where you are not sure whether it is something entirely new, a bug, or a gap in your knowledge. The client had a large query that needed optimizing. The query itself looked pretty good: no UDFs, UNION ALL was used rather than UNION, and most of the predicates were sargable, other than one or two minor ones. There were a few extra joins that could be eradicated, and having fixed up the query I then started to dive into the plan. I could see all manner of spills in the hash joins and the sort operations; these are caused when SQL Server has not reserved enough memory and has to write to tempdb, a VERY expensive operation that is generally avoidable. These, however, are a symptom of a bad row estimation somewhere else, and when that bad estimation is combined with other estimation errors, chaos can ensue. Working my way back down the plan, I found the cause, and the more I thought about it, the more I became convinced that the optimizer could be making a much more intelligent choice. The first step is to reproduce, and I was able to simplify the query down to a single join between two tables, Product and ProductStatus; from a business point of view this is quite fundamental: find the status of particular products to show whether they are 'active', 'inactive' or whatever. The query itself couldn't be any simpler. The estimated plan looked like this: ignore the "!" warning, which is a missing index, but notice that Products has 27,984 rows and the join outputs 14,000. The actual plan shows how bad that estimation of 14,000 is: every row in Products has a corresponding row in ProductStatus. This is unsurprising; in fact, it is guaranteed: there is a trusted FK relationship between the two columns. There is no way that the actual output of the join can be different from the input. The optimizer is already partly aware of the foreign key metadata, and that can be seen in the simplification stage. If we drop the Description column from the query, the join to ProductStatus is optimized out. It serves no purpose in the query: there is no data required from the table, and the optimizer knows that the FK guarantees that a matching row will exist, so it has been removed. Surely the same should be applied to the row estimations in the initial example, right? If you think so, please upvote this Connect item. So what are our options in fixing this error? Simply changing the join to a left join will cause the optimizer to think that we could allow the rows not to exist, and a subselect would also work. However, this is a client site; I'm not able to change each and every query where this join takes place, but there is a more global switch that will fix this error: trace flag 2301. This is described as, perhaps loosely, "Enable advanced decision support optimizations". We can test this on the original query in isolation by using the "QueryTraceOn" option, and lo and behold, our estimated plan now has the 'correct' estimation. Many thanks go to Paul White (b|t) for his help and for keeping me sane through this.

    Read the article

  • Graphics trouble after resuming from hibernate or suspend

    - by Voyagerfan5761
    I have a Dell Inspiron 2650 (with NVidia graphics, using nouveau drivers) that I'm using to try out Ubuntu. It's all great, except that Hibernate and Suspend aren't usable. Yes, I know that questions about power-save issues are rampant in the Linux support universe, but it seems that every time I find a solution it's for a very specific hardware combination that doesn't apply to me. So anyway, here goes. When I resume from either power-saving mode, I get graphics problems anywhere in the range from a few scattered random-colored pixels that won't change, all the way to full-screen patterns that don't change as I move the mouse, hit keys on the keyboard, or even bring up the shutdown dialog using the power button. Those full-screen issues (which may involve stripes with random pixels, partial black screen, or both) always end in me forcing the machine to shut down by holding the power button. I haven't done much testing yet to determine which severity level is most commonly associated with each mode, but I do avoid using either power-save option because of these issues. I'll add info on my hardware as I can gather it (no home internet connection, and this laptop is tethered to my desk by a dead battery and casing degradation). Please feel free to request something specific in the question comments. Hardware Info: see this hardinfo report for my system's hardware configuration. (No, my username is not "myuser"; I sanitized hardinfo's output before publishing it.) Screenshots: these screenshots are from a relatively mild occurrence, which happened after the second hibernation I took that session. The first one worked great, though I used the wireless card and Firefox heavily between the two hibernation attempts. Take a look at what happened when I opened my home directory in Nautilus and scrolled it. See below for the situations I've tested so far. The real trouble comes when the machine resumes to an unusable state; in such cases I can't even unlock the screen or properly reboot, much less take a screenshot. I have a hunch that putting a CD in the drive will cause such major failures, and I will try that at some point; see related question.
    Situations Tested
    Maverick (10.10)
    Suspend:
    - Seems to suspend nicely with nothing running
    - Seems to suspend nicely with flash drive plugged in
    - On resume from suspend with no flash drive, Terminal and gedit running: funky graphics on top of log output, then blank screen with pixelated cursor; no response to power button (normally will shut down 60 seconds later)
    Hibernate:
    - Seems to hibernate nicely with nothing running
    - Seems to hibernate nicely with a few apps (Terminal, Mouse preferences) running
    - Seems to not hibernate when flash drive is plugged in
    - Seems to not hibernate when System Monitor is running
    - Have encountered failed hibernation (after several hours and one successful hibernate/thaw cycle) with no external media connected and no programs running except normal background stuff
    Natty LiveCD (11.04_2010-12-22)
    When I tested it, Natty wouldn't stay logged in. It played part of the login sound and then [ OK ] appeared in the top right corner (white-on-black terminal text) for a few seconds. Then it kicked me back to the Unlock screen. It did that four times before I gave up and just tested suspend from the Unlock screen.
    Suspend:
    - Resumed to vertical gray and black lines 2px (?) wide, then shifted to vertical "jail bars" of black over a black screen with the above-described random pixels and mouse pointer. No apparent response to input from the mouse (clicking randomly). Keyboard and touchpad unrecognized.

    Read the article

  • Terminal non-responsive on load, can't enter anything until CTRL+C

    - by Silver Light
    Hello! I have an issue with the terminal in Ubuntu 10.04. When I launch it, it hangs, and I cannot do anything until I press CTRL+C. I cannot remember when this started. What can be wrong? It looks like the terminal is loading or processing something each time it starts. How can I diagnose and solve this problem? EDIT: Here are the contents of ~/.bashrc:

    # ~/.bashrc: executed by bash(1) for non-login shells.
    # see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
    # for examples

    # If not running interactively, don't do anything
    [ -z "$PS1" ] && return

    # don't put duplicate lines in the history. See bash(1) for more options
    # ... or force ignoredups and ignorespace
    HISTCONTROL=ignoredups:ignorespace

    # append to the history file, don't overwrite it
    shopt -s histappend

    # for setting history length see HISTSIZE and HISTFILESIZE in bash(1)
    HISTSIZE=1000
    HISTFILESIZE=2000

    # check the window size after each command and, if necessary,
    # update the values of LINES and COLUMNS.
    shopt -s checkwinsize

    # make less more friendly for non-text input files, see lesspipe(1)
    [ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)"

    # set variable identifying the chroot you work in (used in the prompt below)
    if [ -z "$debian_chroot" ] && [ -r /etc/debian_chroot ]; then
        debian_chroot=$(cat /etc/debian_chroot)
    fi

    # set a fancy prompt (non-color, unless we know we "want" color)
    case "$TERM" in
        xterm-color) color_prompt=yes;;
    esac

    # uncomment for a colored prompt, if the terminal has the capability; turned
    # off by default to not distract the user: the focus in a terminal window
    # should be on the output of commands, not on the prompt
    #force_color_prompt=yes

    if [ -n "$force_color_prompt" ]; then
        if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then
            # We have color support; assume it's compliant with Ecma-48
            # (ISO/IEC-6429). (Lack of such support is extremely rare, and such
            # a case would tend to support setf rather than setaf.)
            color_prompt=yes
        else
            color_prompt=
        fi
    fi

    if [ "$color_prompt" = yes ]; then
        PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
    else
        PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
    fi
    unset color_prompt force_color_prompt

    # If this is an xterm set the title to user@host:dir
    case "$TERM" in
    xterm*|rxvt*)
        PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1"
        ;;
    *)
        ;;
    esac

    # enable color support of ls and also add handy aliases
    if [ -x /usr/bin/dircolors ]; then
        test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
        alias ls='ls --color=auto'
        #alias dir='dir --color=auto'
        #alias vdir='vdir --color=auto'

        alias grep='grep --color=auto'
        alias fgrep='fgrep --color=auto'
        alias egrep='egrep --color=auto'
    fi

    # some more ls aliases
    alias ll='ls -alF'
    alias la='ls -A'
    alias l='ls -CF'

    # Add an "alert" alias for long running commands. Use like so:
    #   sleep 10; alert
    alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'

    # Alias definitions.
    # You may want to put all your additions into a separate file like
    # ~/.bash_aliases, instead of adding them here directly.
    # See /usr/share/doc/bash-doc/examples in the bash-doc package.
    if [ -f ~/.bash_aliases ]; then
        . ~/.bash_aliases
    fi

    # enable programmable completion features (you don't need to enable
    # this, if it's already enabled in /etc/bash.bashrc and /etc/profile
    # sources /etc/bash.bashrc).
    if [ -f /etc/bash_completion ] && ! shopt -oq posix; then
        . /etc/bash_completion
    fi

    # Source .profile
    if [ -f ~/.profile ]; then
        . ~/.profile
    fi

    Setting -x at the beginning showed me that it tries to repeat this without stopping:

    +++++++++++++++++++ '[' 'complete -f -X '\''!*.@(pdf|PDF)'\'' acroread gpdf xpdf' '!=' 'complete -f -X '\''!*.@(pdf|PDF)'\'' acroread gpdf xpdf' ']'
    +++++++++++++++++++ line='complete -f -X '\''!*.@(pdf|PDF)'\'' acroread gpdf xpdf'
    +++++++++++++++++++ line='complete -f -X '\''!*.@(pdf|PDF)'\'' acroread gpdf xpdf'
    +++++++++++++++++++ line=' acroread gpdf xpdf'
    +++++++++++++++++++ list=("${list[@]}" $line)
    +++++++++++++++++++ read line

    Read the article

  • Best triple head display setup

    - by dgel
    I'm currently running Ubuntu 12.04 with a darn good triple-head display setup. I've got a VisionTek 900530 Radeon HD 5450 512MB DDR3 PCI Express video card that has two DVI outputs and one Mini DisplayPort that I have connected to an HDMI adapter. I'm running three identical Asus 1920x1080 monitors that each have a DVI, VGA, and HDMI input. I'm using the xorg-edgers PPA, so I'm using the open source radeon driver version 6.99.99. I tried using the ATI binary fglrx driver, but I wasn't able to get the three monitors working properly - the monitor connected via HDMI / DisplayPort wouldn't run at full resolution. The setup is almost perfect:
    - Compiz runs fine and is quite snappy.
    - I'm not able to use that great Compiz feature where you can drag a window to the side of a display and it will half-maximize.
    - I occasionally experience display corruption weirdness with Unity and need to restart it.
    - When I use a dropdown menu in LibreOffice it often pops the menu down in another window. For example, if I'm using the center monitor and click the Insert menu, the menu pulls down on the monitor to my right, forcing me to chase it. If I chase down the menu and choose Manual Break, the dialog appears over on my left monitor. This absurdity is mildly entertaining but has lost its novelty.
    I've decided to build a new system and have spared no expense - latest i7 processor, SSD, etc. I really like the performance of the Nvidia binary drivers, so I put two ZOTAC ZT-40707-10L GeForce GT 440 cards in the system, figuring I'd have four DVI outputs and an awesome triple (or even eventually quad) head setup. Unfortunately it appears that I didn't do sufficient research before my purchase. It seems that Nvidia TwinView only supports two monitors on one card (I guess that's why they call it TwinView...). I messed around with running two X servers, but I really don't want that - being able to drag windows to any monitor is critical. It doesn't sound like Xinerama is an option because, from what I understand, it simply doesn't support Compiz. I've seen a BaseMosaic option that can be used with the Nvidia drivers that appears to support an almost unlimited number of heads - unfortunately my cheap little cards don't support it. I'm also not sure whether you'd still have all the nice maximizing and snapping that TwinView provides, or whether Ubuntu would only see it as one massive display. I put my old trusty ATI card into my new system and installed 12.10. I'm using the open source radeon drivers again because even in 12.10 I can't get the fglrx binary drivers to do triple head. Unfortunately, even with an unbelievably powerful system the experience is extremely sluggish (much more so than my experience in 12.04). The menu-scattering problem appears to be fixed, but I get a lot of nasty Unity display corruption. So finally, my question is this: what hardware / drivers should I use? I'm willing to buy (almost) any video card(s). I have two PCI-Express 3.0 slots on my motherboard (which has an integrated Intel HD card). I'm willing to use ATI or Nvidia cards and willing to run Ubuntu 12.04.1 or 12.10. I'm not a gamer, but I do want beautiful and snappy Compiz effects. Does anyone out there have the perfect triple-head setup in 12.04 or 12.10? What hardware / drivers are you using? I have those two Nvidia cards but will probably be returning them unless someone knows a way to use them together for a triple-head setup.
    Since I'm having pretty good luck with a single ATI card providing three displays, should I just buy a beefier one in the hope that it will fix the horrible sluggishness I'm experiencing in 12.10?

    Read the article

  • Q&A: Drive Online Engagement with Intuitive Portals and Websites

    - by kellsey.ruppel
    We had a great webcast yesterday and wanted to recap the questions that were asked throughout.
    Q: Can ECM distribute content to 3rd-party sites?
    A: ECM, which is now called WebCenter Content, can distribute content to 3rd-party sites via several means, as well as SSXA - Site Studio for External Applications.
    Q: Will you be able to provide more information on these means and SSXA?
    A: If you have an existing JSP application, you can add the SSXA libraries to the IDE where your application was built (JDeveloper, for example). You can then drop some code into your 3rd-party site/application that can both create and pull dynamically contributable content out of the Content Server for inclusion in your pages. If the 3rd-party site is not a JSP application, there is also the option of leveraging two Site Studio (not SSXA) specific custom WebCenter Content services to pull Site Studio XML content into a page. More information on SSXA can be found here: http://docs.oracle.com/cd/E17904_01/doc.1111/e13650/toc.htm
    Q: Is there another way than a "gadget" to integrate applications (like a loan simulator) in WebCenter Sites?
    A: There are some other ways, such as leveraging the Pagelet Producer, which is a core component of WebCenter Portal. Oracle WebCenter Portal's Pagelet Producer (previously known as Oracle WebCenter Ensemble) provides a collection of useful tools and features that facilitate dynamic pagelet development. A pagelet is a reusable user interface component. Any HTML fragment can be a pagelet, but pagelet developers can also write pagelets that are parameterized and configurable, that dynamically interact with other pagelets, and that respond to user input. Pagelets are similar to portlets, but while portlets were designed specifically for portals, pagelets can be run on any web page, including within a portal or other web application. Pagelets can be used to expose platform-specific portlets in other web environments. More on the Pagelet Producer can be found here: http://docs.oracle.com/cd/E23943_01/webcenter.1111/e10148/jpsdg_pagelet.htm#CHDIAEHG
    Q: Can you describe the mechanism available to achieve the context transfer of content?
    A: The primary goal of context transfer is to provide a uniform experience to customers as they transition from one channel to another; in the use case discussed in the webcast, it was a customer moving from the .com marketing website to the self-service site where the customer wants to manage his account information. If WebCenter Sites was able to identify and segment the customer into a specific category where the customer is a potential target for some promotions, the same promotions should be targeted to the customer when he is in the self-service site, which is managed by WebCenter Portal. The context transfer can be achieved by calling out to the WebCenter Sites Engage Server APIs, which will identify the segment that the customer has been bucketed into. Then, through REST APIs, WebCenter Portal can request from WebCenter Sites the specific content that needs to be targeted at the customer for the identified segment. While this integration can be achieved through custom integration today, Oracle is looking into productizing this integration in future releases.
    Q: How can context be transferred from WebCenter Sites (marketing site) to WebCenter Portal (online services)?
    A: The WebCenter Portal Personalization server can call into the WebCenter Sites Engage Server to identify the segment for the user and then, through REST APIs, request the specific content that needs to be surfaced in the Portal. (A sketch of this two-step flow follows below.)
    Still have questions? Leave them in the comments section! And you can catch a replay of the webcast here.
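    To make that two-step flow concrete, here is a minimal sketch of its shape only: look up the visitor's segment, then fetch content targeted at that segment. The host, endpoint paths, and JSON fields below are invented placeholders, not actual WebCenter REST endpoints; the real service URLs would come from the WebCenter Sites and Portal documentation.

    // Sketch of the segment-then-content flow. Host, paths, and fields are
    // hypothetical placeholders, NOT real WebCenter REST endpoints.
    async function fetchTargetedContent(visitorId: string): Promise<unknown> {
      // Step 1 (hypothetical endpoint): ask the Engage Server which segment
      // this visitor has been bucketed into.
      const segRes = await fetch(
        "https://sites.example.com/engage/segments?visitor=" + encodeURIComponent(visitorId));
      const { segmentId } = await segRes.json() as { segmentId: string };

      // Step 2 (hypothetical endpoint): request content targeted at that
      // segment, for WebCenter Portal to surface in its page.
      const contentRes = await fetch(
        "https://sites.example.com/content/targeted?segment=" + encodeURIComponent(segmentId));
      return contentRes.json();
    }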

    Read the article


< Previous Page | 682 683 684 685 686 687 688 689 690 691 692 693  | Next Page >