Search Results

Search found 36081 results on 1444 pages for 'object expected'.


  • Is this a dual monitor reset bug?

    - by Tentresh
    My two displays, driven by an Intel GMA x4500, are the laptop's built-in panel (1280x800 native resolution) and an external display (1920x1080). A few minutes after I log in to my dual monitor setup, it gets reset to mirrored screens. If I restore the settings via the Displays application, everything is fine. On each reset, the following messages are written into /var/log/Xorg.0.log:

        [ 60.852] (II) PM Event received: Capability Changed
        [ 60.852] I830PMEvent: Capability change
        [ 132.920] (II) intel(0): EDID vendor "SEC", prod id 12869
        [ 132.920] (II) intel(0): Printing DDC gathered Modelines:
        [ 132.920] (II) intel(0): Modeline "1280x800"x0.0 68.94 1280 1296 1344 1408 800 801 804 816 -hsync -vsync (49.0 kHz)
        [ 134.228] (II) intel(0): Allocated new frame buffer 1280x800 stride 5120, tiled

    Whereas right after startup, or after a manual resolution reset, /var/log/Xorg.0.log reports the expected frame buffer allocation spanning both screens:

        [ 1562.382] (II) intel(0): EDID vendor "SEC", prod id 12869
        [ 1562.382] (II) intel(0): Printing DDC gathered Modelines:
        [ 1562.382] (II) intel(0): Modeline "1280x800"x0.0 68.94 1280 1296 1344 1408 800 801 804 816 -hsync -vsync (49.0 kHz)
        [ 1576.740] (II) intel(0): Allocated new frame buffer 3200x1080 stride 12800, tiled

    Is Ubuntu 12.04 not compatible with my video card? Can this be solved within Ubuntu? I like its interface, but manually fiddling with the resolution on every login is not bearable.

    Read the article

  • KISS principle applied to programming language design?

    - by Giorgio
    KISS ("keep it simple stupid", see e.g. here) is an important principle in software development, even though it apparently originated in engineering. Citing from the wikipedia article: The principle is best exemplified by the story of Johnson handing a team of design engineers a handful of tools, with the challenge that the jet aircraft they were designing must be repairable by an average mechanic in the field under combat conditions with only these tools. Hence, the 'stupid' refers to the relationship between the way things break and the sophistication available to fix them. If I wanted to apply this to the field of software development I would replace "jet aircraft" with "piece of software", "average mechanic" with "average developer" and "under combat conditions" with "under the expected software development / maintenance conditions" (deadlines, time constraints, meetings / interruptions, available tools, and so on). So it is a commonly accepted idea that one should try to keep a piece of software simple stupid so that it easy to work on it later. But can the KISS principle be applied also to programming language design? Do you know of any programming languages that have been designed specifically with this principle in mind, i.e. to "allow an average programmer under average working conditions to write and maintain as much code as possible with the least cognitive effort"? If you cite any specific language it would be great if you could add a link to some document in which this intent is clearly expressed by the language designers. In any case, I would be interested to learn about the designers' (documented) intentions rather than your personal opinion about a particular programming language.

    Read the article

  • How to avoid throwing vexing exceptions?

    - by Mike
    Reading Eric Lippert's article on exceptions was definitely an eye opener on how I should approach exceptions, both as the producer and as the consumer. However, I'm still struggling to define a guideline on how to avoid throwing vexing exceptions. Specifically:

    Suppose you have a Save method that can fail because a) somebody else modified the record before you, or b) the value you're trying to create already exists. These conditions are to be expected and not exceptional, so instead of throwing an exception you decide to create a Try version of your method, TrySave, which returns a boolean indicating whether the save succeeded. But if it fails, how will the consumer know what the problem was? Or would it be best to return an enum indicating the result, something like Ok/RecordAlreadyModified/ValueAlreadyExists? With integer.TryParse this problem doesn't exist, since there's only one reason the method can fail.

    Is the previous example really a vexing situation? Or would throwing an exception in this case be the preferred way? I know that's how it's done in most libraries and frameworks, including the Entity Framework.

    How do you decide when to create a Try version of your method vs. providing some way to test beforehand whether the method will work or not? I'm currently following these guidelines:

        If there is the chance of a race condition, create a Try version. This prevents the consumer from having to catch an exogenous exception. For example, the Save method described above.
        If the method to test the condition would do pretty much everything the original method does, create a Try version. For example, integer.TryParse().
        In any other case, create a method to test the condition.
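    For illustration, here is a minimal sketch of the enum-result idea I have in mind (written in Python for brevity; the SaveResult names and the store calls are made up for this example, not an existing API):

        from enum import Enum, auto

        class SaveResult(Enum):
            OK = auto()
            RECORD_ALREADY_MODIFIED = auto()
            VALUE_ALREADY_EXISTS = auto()

        def try_save(record, store):
            # Both expected failure modes are reported as data
            # instead of being raised as exceptions.
            if store.modified_since(record.loaded_version):  # hypothetical API
                return SaveResult.RECORD_ALREADY_MODIFIED
            if store.value_exists(record.value):             # hypothetical API
                return SaveResult.VALUE_ALREADY_EXISTS
            store.write(record)
            return SaveResult.OK

    The consumer can then switch on the returned value, which keeps the race-condition check inside the method while still telling the caller why a save was rejected.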

    Read the article

  • Sound in Ubuntu 12.10 on Chromebook

    - by Paul Mennega
    I've got an HP Chromebook 14 that I've rooted and installed Ubuntu on via Crouton (upgraded to 12.10), and most things are working nicely. One issue that I am having trouble solving relates to sound. I've got sound output that I can control via the individual app (e.g. YouTube), but when trying to control the volume via the built-in xfce4-mixer, I get the following error:

        GStreamer was unable to detect any sound devices. Some sound system specific GStreamer packages may be missing. It may also be a permissions problem.

    Opening a terminal, I can run alsamixer and control the volume levels; it reports back that the card is CRAS. I'd love to be able to do it from XFCE and the built-in mixer. I've tried re-installing the GStreamer packages, but no luck. It seems like I am close, but I'm not sure where to go from here. On a maybe unrelated note, plugging headphones into the headphone jack in Ubuntu also does not stop the sound from coming from the speakers, and nothing comes through the headphones. Switching back to Chrome OS, everything works as expected, so I think that it's a driver issue.

    Read the article

  • MQTT, GWT, ActiveMQ stack to bring jms to the browser

    - by scphantm
    I am in the preliminary stages of architecting a legacy replacement project. They already have sub-half-second performance on their green screens and they want the same on their web app. We have a 390 mainframe that can handle anything we throw at it, but there isn't a good JVM for it, so we have two tiers of WebSphere servers between the mainframe and the browser: the UI server and the BL server. For the UI, I'm leaning towards GWT. But one thing that I think would seal the deal is to add messaging capabilities to the browser. The idea is, say you click on a link that displays a second panel of information: instead of the classic GWT approach, where it triggers a GWT-RPC call to the UI server, the UI server routes it to the BL server, and the BL server sends it to the mainframe and back out, it drops an MQTT message directly to the BL server or directly to the mainframe. Say writes go to the BL server, reads go to the mainframe. This is an easy enough thing in classic JMS because you can issue a message that has an expected response, then have your callback ready to get the response. But from what I'm reading so far, it looks like MQTT doesn't have that. It looks like it's strictly fire and forget, which would make it really tough to come up with a way to get a response back to the workstation that called it. Am I right here? Has anyone tried this stack before with GWT?
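    For context, MQTT 3.x itself is indeed fire-and-forget; request/response is usually layered on top by convention: the requester subscribes to its own reply topic and puts that topic plus a correlation id into the request payload, and the responder publishes its answer there. A rough sketch of that convention (in Python with the paho-mqtt client for brevity; the broker address, topic names and payload format are all assumptions):

        import json, uuid
        import paho.mqtt.client as mqtt

        reply_topic = "replies/" + uuid.uuid4().hex  # per-client reply topic (assumed naming)

        def on_message(client, userdata, msg):
            body = json.loads(msg.payload)
            # Match the response to the original request via the correlation id.
            print("response to", body["correlation_id"], "->", body["result"])

        client = mqtt.Client()
        client.on_message = on_message
        client.connect("broker.example.com", 1883)   # hypothetical broker
        client.subscribe(reply_topic)

        request = {"correlation_id": uuid.uuid4().hex,
                   "reply_to": reply_topic,          # responder publishes its answer here
                   "query": "account/12345"}
        client.publish("requests/mainframe", json.dumps(request))
        client.loop_forever()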

    Read the article

  • Nested entities in Google App Engine. Do I do it right?

    - by Aleksandr Makov
    Trying to make the most of the GAE Datastore entities concept, but a few doubts are nagging me. Say I have this model:

        class User(ndb.Model):
            email = ndb.StringProperty(indexed=True)
            password = ndb.StringProperty(indexed=False)
            first_name = ndb.StringProperty(indexed=False)
            last_name = ndb.StringProperty(indexed=False)
            created_at = ndb.DateTimeProperty(auto_now_add=True)

            @classmethod
            def key(cls, email):
                return ndb.Key(User, email)

            @classmethod
            def Add(cls, email, password, first_name, last_name):
                user = User(parent=cls.key(email), email=email, password=password,
                            first_name=first_name, last_name=last_name)
                user.put()
                UserLogin.Record(email)

        class UserLogin(ndb.Model):
            time = ndb.DateTimeProperty(auto_now_add=True)

            @classmethod
            def Record(cls, user_email):
                login = UserLogin(parent=User.key(user_email))
                login.put()

    I need to keep track of the times of successful login operations. Each time a user logs in, the UserLogin.Record() method will be executed. Now the question — do I make it right? Thanks.

    EDIT 2: Ok, used the typed arguments, but then it raised this:

        Expected Key instance, got User(key=Key('User', 5418393301680128), created_at=datetime.datetime(2013, 6, 27, 10, 12, 25, 479928), email=u'[email protected]', first_name=u'First', last_name=u'Last', password=u'password')

    That's clear enough to understand, but I don't get why the docs are misleading. They implicitly propose to use:

        # Set Employee as Address entity's parent directly...
        address = Address(parent=employee)

    But Model expects a key. And what's worse, with parent=user.key() it complains that key() isn't callable; I found out that user.key works.

    EDIT 1: After reading the example from the docs and trying to replicate it, I got a type error: TypeError('Model constructor takes no positional arguments.'). This is the exact code used:

        user = User('[email protected]', 'password', 'First', 'Last')
        user.put()
        stamp = UserLogin(parent=user)
        stamp.put()

    I understand that Model was given the wrong arguments, BUT why is it in the docs?
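    For reference, a minimal corrected sketch (assuming the classmethod named key is renamed, e.g. to make_key, since defining a classmethod called key shadows ndb's built-in key property on instances, which is part of the confusion above; the email address is a placeholder):

        user = User(email='user@example.com', password='password',
                    first_name='First', last_name='Last')  # keyword arguments only
        user.put()

        # parent must be an ndb.Key, not a model instance; in ndb the entity's
        # key is the property user.key, not a method call.
        stamp = UserLogin(parent=user.key)
        stamp.put()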

    Read the article

  • How do I make money from my FOSS while staying anonymous?

    - by user21007
    Let's say that:

        You have created a FOSS project that other people find useful, perhaps useful enough to donate to or pay for modifications to be done.
        It is a perfectly legitimate and innocuous software project. It has nothing to do with cryptography as munitions, p2p music, or anything likely to lead to a search warrant or being sued.
        You want your involvement to stay anonymous or pseudonymous.
        You would like to receive some money for your efforts, if people are willing.

    Is that possible, and if so, how could it be done? When I talk about anonymity, I realize that it is necessary to define the extent. I am not talking about Wikileaks-style twenty-layers-of-proxies anonymity. I would expect a three letter agency to be able to identify the person easily. What is wanted is shielding from commercial competitors or random people, who would not be expected to be able to get the financial intermediary to divulge your details just by asking for them.

    Why would you want to stay anonymous? I can think of several valid reasons: maybe you operate a stealth-mode startup and don't want to give your competitors clues as to the technology you are using. Maybe it is a project that has nothing to do with your daily job and is not developed there, but the company you work for has an unfair (and possibly unenforceable) policy stating that any coding you do is owned by them. Maybe you just value your privacy. For what it's worth, you intend to pay the relevant taxes in your country on any donations.

    Read the article

  • libgdx actors and instant actions

    - by vaati
    I'm having trouble with actors and actions. I have a list of actors, and each of them has either no action or one sequence action. This sequence action has either:

        a couple of actions (some are instant, some have duration 0), or
        a couple of actions followed by a parallel action.

    My problem is the following: some of the instant actions are used to set the position and the alpha of the actor. So when one of the actions is "move to x,y and set alpha to 0", the actor is visible for one frame at position 0,0, moves instantly to x,y for the next frame, and then disappears. Though this behaviour is to be expected, I want to avoid it. How can I achieve that? I tried to intercept the actions before I put the actors on the stage, but I need the stage width/height for some actions. So something like this won't work:

        Action actionSequence = actor.getActions().get(0);
        Array<Action> actions = ((SequenceAction) actionSequence).getActions();
        for (Action act : actions) {
            if (act.act(0))
                System.out.println("action " + act.toString() + " successfully run");
            else
                System.out.println("action " + act.toString() + " wasn't instant");
        }

    It gets even more complicated when an actor can also have a repeat action instead of the sequence action (because you have to run the actions that have duration 0 only once, without repeat, and then start the repeat). Any help is appreciated.

    Read the article

  • Strange and erratic transformations when using OpenGL VBOs to render scene

    - by janoside
    I have an existing iOS game with fairly simple scenes (all textured quads) and I'm using Apple's "Texture2D" class. I'm trying to convert this class to use VBOs, since the vertices of my objects basically never change, so I may as well not re-create them for every object every frame. I have the scene rendering using VBOs, but the sizes and orientations of all rendered objects are strange and erratic - though locations seem generally correct. I've been toying with this code for a few days now, and I've found something odd: if I re-create all of my VBOs each frame, everything looks correct, even though I'm almost certain my vertices are not changing.

    Other notes:

        I'm basing my work on this tutorial, and therefore am also using "IBOs"
        I create my buffers before rendering begins
        My buffers include vertex and texture data
        I'm using OpenGL ES 1.1

    Fearing some strange effect of the current matrix GL state at the time of buffer creation, I've also tried wrapping my buffer-setup code in a "pushMatrix-loadIdentity-popMatrix" block, which (as expected) had no effect. I'm aware that various articles have been published demonstrating that VBOs may not help performance, but I want to understand this problem and at least have the option to use them. I realize this is a shot in the dark, but has anyone else experienced this type of strange behavior? What might I be doing to result in this behavior? It's rather difficult for me to isolate the problem since I'm working in an existing, moderately complex project, so suggestions about how to approach the problem are also quite welcome.

    Read the article

  • Custom daemon script: works, but does not run at boot / startup

    - by pearjoint
    This is Ubuntu 10.10 Maverick. I have the following shell script in init.d that I want to run as a "daemon" (background service with start/stop/restart, really) at system startup. There is a symlink in rc3.d; I tried 4 and 5 too. (Ideally this would initialize before graphical login happens and before a user logs in.) IMPORTANT: the script works 100% as expected and required when testing with service MetaLeapDaemon start and service MetaLeapDaemon stop. (This shell script calls a Python program which makes sure the appropriate .pid files are both created at startup and deleted at exit.) So generally it works fine, but my only issue is why it will not be run at any of the run-levels I tried. I know for sure it isn't run because the log file it normally creates does not get created. As you can see (by the lack of any uid:gid args in the start-stop-daemon commands) this would currently run only under root; is this forbidden in a default setup? Here's the script, pretty much your run-of-the-mill daemon script really:

        #! /bin/sh
        DAEMON=/opt/metaleap/_core/daemon/MetaLeapDaemon.py
        NAME=MetaLeapDaemon
        DESC="MetaLeapDaemon"
        test -f $DAEMON || exit 0
        set -e

        case "$1" in
          start)
            start-stop-daemon --start --pidfile /var/run/$NAME.pid --exec $DAEMON
            ;;
          stop)
            start-stop-daemon --stop --pidfile /var/run/$NAME.pid
            ;;
          restart)
            start-stop-daemon --stop --pidfile /var/run/$NAME.pid
            sleep 1
            start-stop-daemon --start --pidfile /var/run/$NAME.pid --exec $DAEMON
            ;;
          *)
            N=/etc/init.d/$NAME
            echo "Usage: $N {start|stop|restart}" >&2
            exit 1
            ;;
        esac

        exit 0

    Read the article

  • Database Backup History From MSDB in a pivot table

    - by steveh99999
    I knocked up a nice little query to display the backup history for each database in a pivot table format. I wanted to display the most recent full, differential, and transaction log backup for each database. Here's the SQL:

        WITH backupCTE AS
        (
          SELECT name, recovery_model_desc,
                 D AS 'Last Full Backup',
                 I AS 'Last Differential Backup',
                 L AS 'Last Tlog Backup'
          FROM
          (
            SELECT db.name, db.recovery_model_desc, type, backup_finish_date
            FROM master.sys.databases db
            LEFT OUTER JOIN msdb.dbo.backupset a ON a.database_name = db.name
            WHERE db.state_desc = 'ONLINE'
          ) AS Sourcetable
          PIVOT (MAX(backup_finish_date) FOR type IN (D, I, L)) AS MostRecentBackup
        )
        SELECT * FROM backupCTE

    Gives output such as this: [screenshot]

    With this query, I can then build up some straightforward queries to ensure backups are scheduled and running as expected. For example, the following WHERE clauses can be used:

        WHERE [Last Full Backup] IS NULL
          -- database has never been backed up

        WHERE [Last Tlog Backup] < DATEADD(mi, -60, GETDATE())
          AND recovery_model_desc <> 'SIMPLE'
          -- transaction log not backed up in the last 60 minutes

        WHERE [Last Full Backup] < DATEADD(dd, -1, GETDATE())
          AND [Last Differential Backup] < [Last Full Backup]
          -- no backup in the last day

        WHERE [Last Differential Backup] < DATEADD(dd, -1, GETDATE())
          AND [Last Full Backup] < DATEADD(dd, -8, GETDATE())
          -- no differential backup in the last day when the last full backup
          -- is over 8 days old

    Read the article

  • SSMS Tools Pack 2.5.3 is out with bug fixes and improved licensing

    - by Mladen Prajdic
    Licensing for SSMS Tools Pack 2.5 has been quite a hit and I received some awesome feedback. Version 2.5.3 contains a few bug fixes and desired licensing improvements. Changes include more licensing options, prices in euros for bookkeeping reasons (don't you just love those :)) and a generally easier purchase and licensing process for users. Licensing now offers four options:

        Per machine license (€25). Perfect if you do all your work from a single machine. Plus one OS reinstall activation.
        Personal license (€75). Up to 4 machine activations, plus 2 OS reinstall activations and any number of virtual machine activations.
        Team license (€240). Up to 10 machine activations, plus 4 OS reinstall activations and any number of virtual machine activations.
        Enterprise license (€350+). For more than 10 machine activations and any number of virtual machine activations.

    There is also a 30-day license: a time-based demo license bound to a machine. You can view all the details on the Licensing page. If you want to receive email notifications when a new version of SSMS Tools Pack is out, you can do that on the Main page or on the Download page. Version 2.7 is expected in the first half of February and won't support SSMS 2005 and 2005 Express anymore. Enjoy it!

    Read the article

  • Unable to mount an LVM Hard-drive after upgrade

    - by Bruce Staples
    I imagine this is a basic gotcha, but I can't see it. I have a system with 2 (physical) hard drives. The boot system (/dev/sda) was running 10.04 and the second drive (/dev/sdb) was just a mounted filesystem. I did a clean load of Ubuntu 12.04 overwriting /dev/sda (not an upgrade) and now cannot mount the second drive, so I do not know what to enter into fstab. I had expected to use:

        /dev/sdb /tera ext4 defaults 0 2

    But even manual mounting fails (I have also tried various "-t" options on the off chance!):

        sudo mount -t ext4 /dev/sdb1 /tera
        mount: wrong fs type, bad option, bad superblock on /dev/sdb1,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    Output from disk queries indicates that it is a Linux LVM and still a healthy disk:

        sudo lshw -C disk
        *-disk:0
            description: ATA Disk
            product: WDC WD5000AACS-0
            vendor: Western Digital
            physical id: 0
            bus info: scsi@2:0.0.0
            logical name: /dev/sda
            version: 01.0
            serial: WD-WCASU1401098
            size: 465GiB (500GB)
            capabilities: partitioned partitioned:dos
            configuration: ansiversion=5 signature=00015a55
        *-disk:1
            description: ATA Disk
            product: WDC WD10EADS-00L
            vendor: Western Digital
            physical id: 1
            bus info: scsi@3:0.0.0
            logical name: /dev/sdb
            version: 01.0
            serial: WD-WCAU47836304
            size: 931GiB (1TB)
            capabilities: partitioned partitioned:dos
            configuration: ansiversion=5

        sudo fdisk -l
        Disk /dev/sda: 500.1 GB, 500106780160 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976771055 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00015a55

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048   972580863   486289408   83  Linux
        /dev/sda2       972582910   976769023     2093057    5  Extended
        /dev/sda5       972582912   976769023     2093056   82  Linux swap / Solaris

        Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1               1  1953525167   976762583+  8e  Linux LVM

    LVM doesn't appear to be an option for mount or fstab. ... and here's a SMART data screenshot from Disk Utility.

    Read the article

  • Thematic map contd.

    - by jsharma
    The previous post (creating a thematic map) described the use of an advanced style (color ranged-bucket style). The bucket style definition object has an attribute ('classification') which specifies the data classification scheme to use. Its values can be one of {'equal', 'quantile', 'logarithmic', 'custom'}. We used logarithmic in the previous example. Here we'll describe how to use a custom algorithm for classification, specifically the Jenks Natural Breaks algorithm. We'll use the JavaScript implementation in geostats.js. The sample code from the previous post needs a few changes, which are listed below.

    Include the geostats.js file after or before including oraclemapsv2.js:

        <script src="geostats.js"></script>

    Modify the bucket style definition to use custom classification:

        bucketStyleDef = {
          numClasses : colorSeries[colorName].classes,
          classification: 'custom', // was 'logarithmic' (a logarithmic scale)
          algorithm: jenksFromGeostats,
          styles: theStyles,
          gradient: useGradient ? 'linear' : 'off'
        };

    The function which implements the custom classification scheme is specified as the algorithm attribute value. It must accept two input parameters, an array of OM.Feature and the name of the feature attribute (e.g. TOTPOP) to use in the classification, and it must return an array of buckets (i.e. an array of OM.style.Bucket, or OM.style.RangedBucket in this case). However the algorithm also needs to know the number of classes (i.e. the number of buckets to create), so we use a global to pass that info in. (Note: this bug/oversight will be fixed and the custom algorithm will be passed 3 parameters: the features array, attribute name, and number of classes.) So createBucketColorStyle() has the following changes:

        var numClasses;
        function createBucketColorStyle(colorName, colorSeries, rangeName, useGradient) {
          var theBucketStyle;
          var bucketStyleDef;
          var theStyles = [];
          //var numClasses;
          numClasses = colorSeries[colorName].classes;
          ...
    and the function jenksFromGeostats is defined as:

        function jenksFromGeostats(featureArray, columnName) {
          var items = []; // array of attribute values to be classified
          $.each(featureArray, function(i, feature) {
            items.push(parseFloat(feature.getAttributeValue(columnName)));
          });
          // create the geostats object
          var theSeries = new geostats(items);
          // call getJenks which returns an array of bounds
          var theClasses = theSeries.getJenks(numClasses);
          if (theClasses) {
            theClasses[theClasses.length-1] = parseFloat(theClasses[theClasses.length-1]) + 1;
          } else {
            alert(' empty result from getJenks');
          }
          var theBuckets = [], aBucket = null;
          for (var k = 0; k < numClasses; k++) {
            aBucket = new OM.style.RangedBucket({
              low: parseFloat(theClasses[k]),
              high: parseFloat(theClasses[k+1])
            });
            theBuckets.push(aBucket);
          }
          return theBuckets;
        }

    A screenshot of the resulting map with 5 classes is shown below. It is also possible to simply create the buckets and supply them when defining the bucket style, instead of specifying the function (algorithm). In that case the bucket style definition object would be:

        bucketStyleDef = {
          numClasses : colorSeries[colorName].classes,
          classification: 'custom',
          buckets: theBuckets, // since we are supplying all the buckets
          styles: theStyles,
          gradient: useGradient ? 'linear' : 'off'
        };

    Read the article

  • Using Python to traverse a parent-child data set

    - by user132748
    I have a dataset of two columns in a csv file. The purpose of this dataset is to provide a link between two different ids if they belong to the same person, e.g. (2, 3, 5 belong to 1):

        COLA COLB
        1    2
        1    3
        1    5
        2    6
        3    7
        9    10

    In the above example 1 is linked to 2, 3 and 5; 2 is then linked to 6; and 3 is linked to 7. What I am trying to achieve is to identify all records which are linked to 1 directly (2, 3, 5) or indirectly (6, 7), to be able to say that these ids in column B belong to the same person in column A, and then either dedupe or add a new column to the output file which has 1 populated for all rows that link to 1. Example of the expected output:

        colA colB GroupField
        1    2    1
        1    3    1
        1    5    1
        2    6    1
        3    7    1
        9    10   9
        10   11   9

    I am a newbie and so am not sure how to approach this problem. Appreciate any inputs you can provide.
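    For reference, one common approach (a sketch under assumptions, not from the original question: it treats the pairs as graph edges, assumes a plain headerless two-column csv named links.csv, and uses union-find so every row gets the root id of its connected component):

        import csv

        parent = {}

        def find(x):
            # Walk to the root of x's component, compressing the path as we go.
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        def union(a, b):
            # Link b's root under a's root, so ids seen earlier become roots.
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[rb] = ra

        rows = []
        with open('links.csv') as f:
            for cola, colb in csv.reader(f):
                rows.append((cola, colb))
                union(cola, colb)

        with open('output.csv', 'w', newline='') as f:
            writer = csv.writer(f)
            writer.writerow(['colA', 'colB', 'GroupField'])
            for cola, colb in rows:
                writer.writerow([cola, colb, find(cola)])

    With the sample above, the rows (1,2), (2,6) and (3,7) all end up with GroupField 1, while (9,10) keeps 9.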

    Read the article

  • How to capture a Header or Trailer Count Value in a Flat File and Assign to a Variable

    - by Compudicted
    Recently I had several questions concerning how to process files that carry a header and trailer in them. Typically those files are a product of a data extract from non-Microsoft products, e.g. an Oracle database encompassing various tables' data, where every row starts with an identifier. For example, such a file's data records could look like:

        HDR,INTF_01,OUT,TEST,3/9/2011 11:23
        B1,121156789,DATA TEST DATA,2011-03-09 10:00:00,Y,TEST 18 10:00:44,2011-07-18 10:00:44,Y
        B2,TEST DATA,2011-03-18 10:00:44,Y
        B3,LEG 1 TEST DATA,TRAN TEST,N
        B4,LEG 2 TEST DATA,TRAN TEST,Y
        FTR,4,TEST END,3/9/2011 11:27

    A developer is normally able to break the records apart using a Conditional Split Transformation component, by employing an expression similar to Output1 -- SUBSTRING(Output1,1,2) == "B1" and so on, but often a verification is required after this step to check whether the number of data records read corresponds to the number specified in the trailer record of the file. This portion sometimes trips people up, so I decided to share what I came up with. As an aside, I want to mention that the approach I use is slightly more portable than some others I saw, because I use a separate DFT that can be copied and pasted into a new SSIS package designer surface or re-used within the same package again, and it can survive several trailer/footer records (!). See how a ready DFT can look: [screenshot]

    The first step is to create a Flat File Connection Manager and make sure you get the row split into columns as shown: [screenshot]

    After you are done with the Flat File connection, move on to adding an Aggregate, which is used simply to assign a value to a variable (here the aggregate is used to handle the possibility of multiple footers/headers): [screenshot]

    The next step is adding a Script Transformation as destination, which requires very little coding. First, some variable setup: [screenshot] and finally the code: [screenshot]

    As you can see, it is important to place your code into the appropriate routine in the script; otherwise the end result may not be as expected. As the last step, you would use a regular Script Component to compare the variable value obtained from the DFT above to a package variable value (obtained, say, via a Row Count component) to determine whether the file being processed has the right number of rows.

    Read the article

  • Is individual code ownership important?

    - by Jim Puls
    I'm in the midst of an argument with some coworkers over whether team ownership of the entire codebase is better than individual ownership of components of it. I'm a huge proponent of assigning every member of the team a roughly equal share of the codebase. It lets people take pride in their creation, gives the bug screeners an obvious first place to assign incoming tickets, and helps to alleviate "broken window syndrome". It also concentrates knowledge of specific functionality with one (or two) team members making bug fixes much easier. Most of all, it puts the final say on major decisions with one person who has a lot of input instead of with a committee. I'm not advocating for requiring permission if somebody else wants to change your code; maybe have the code review always be to the owner, sure. Nor am I suggesting building knowledge silos: there should be nothing exclusive about this ownership. But when suggesting this to my coworkers, I got a ton of pushback, certainly much more than I expected. So I ask the community: what are your opinions on working with a team on a large codebase? Is there something I'm missing about vigilantly maintaining collective ownership?

    Read the article

  • Floating point undesireable in highly critical code?

    - by Kirt Undercoffer
    Question 11 in the Software Quality section of "IEEE Computer Society Real-World Software Engineering Problems" (Naveda, Seidman) lists floating point computation as undesirable because "the accuracy of the computations cannot be guaranteed". This is in the context of computing acceleration for an emergency braking system for a high speed train. This thinking seems to be invoking possible errors in small differences between measurements of a moving object, but small differences at slow speeds aren't a problem (or shouldn't be), and small differences between two measurements at high speed are irrelevant. Can there be a problem with small roundoff errors during deceleration for an emergency braking system? This problem has been observed with airplane braking systems, resulting in hydroplaning, but could this actually happen in the context of a high speed train? The concern about floating point errors seems to not be well-founded in this context. Any insight? The floating point is used for acceleration, so perhaps the concern is inching over a speed limit? But floating point should be just fine if they use a double in whatever implementation language. The actual problem in the text states:

        During the inspection of the code for the emergency braking system of a new high speed train (a highly critical, real-time application), the review team identifies several characteristics of the code. Which of these characteristics are generally viewed as undesirable?

        The code contains three recursive functions (well, that one is obvious).
        The computation of acceleration uses floating point arithmetic. All other computations use integer arithmetic.
        The code contains one linked list that uses dynamic memory allocation (second obvious problem).
        All inputs are checked to determine that they are within expected bounds before they are used.
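    To make the magnitude concrete, here is a toy illustration (not from the book) of the kind of roundoff the question alludes to, in Python, whose floats are IEEE 754 doubles:

        # Accumulate 0.1 ten times; 0.1 has no exact binary representation.
        total = 0.0
        for _ in range(10):
            total += 0.1

        print(total)             # 0.9999999999999999
        print(total == 1.0)      # False
        print(abs(total - 1.0))  # ~1.1e-16

    The error is real, but on the order of 1e-16, far below any physically meaningful tolerance for a braking computation done in doubles, which is the point the question body argues.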

    Read the article

  • Take care to unhook Anonymous Delegates

    - by David Vallens
    Anonymous delegates are great; they eliminate the need for lots of small classes that just pass values around. However, care needs to be taken when using them, as they are not automatically unhooked when the function you created them in returns. In fact, after it returns there is no way to unhook them. Consider the following code:

        using System;
        using System.Diagnostics;

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    SimpleEventSource t = new SimpleEventSource();
                    t.FireEvent();
                    FunctionWithAnonymousDelegate(t);
                    t.FireEvent();
                }

                private static void FunctionWithAnonymousDelegate(SimpleEventSource t)
                {
                    t.MyEvent += delegate(object sender, EventArgs args)
                    {
                        Debug.WriteLine("Anonymous delegate called");
                    };
                    t.FireEvent();
                }
            }

            public class SimpleEventSource
            {
                public event EventHandler MyEvent;

                public void FireEvent()
                {
                    if (MyEvent == null)
                    {
                        Debug.WriteLine("Attempting to fire event - but no ones listening");
                    }
                    else
                    {
                        Debug.WriteLine("Firing event");
                        MyEvent(this, EventArgs.Empty);
                    }
                }
            }
        }

    If you expected the anonymous delegate to die with the function that created it, then you would expect the output:

        Attempting to fire event - but no ones listening
        Firing event
        Anonymous delegate called
        Attempting to fire event - but no ones listening

    However, what you actually get is:

        Attempting to fire event - but no ones listening
        Firing event
        Anonymous delegate called
        Firing event
        Anonymous delegate called

    In my example the issue is just slowing things down, but if your delegate modifies objects, then you could end up with difficult-to-diagnose bugs. A solution to this problem is to unhook the delegate within the function (note the delegate must be stored in a variable of the delegate type; C# cannot infer a type for var here):

        EventHandler myDelegate = delegate { Console.WriteLine("I did it!"); };
        MyEvent += myDelegate;
        // .... later
        MyEvent -= myDelegate;

    Read the article

  • Oracle Virtual Desktop Client with USB smart card reader

    - by wim.coekaerts
    I have my Sun Ray thin client at home which I use religiously. I use a Sun Ray 3i at work as my main desktop, and I just always take my smart card home and happily continue with the hot desking feature. We released a software version of the Sun Ray client called Oracle Virtual Desktop Client (OVDC). There is a version for Windows, Linux and Mac OS X. I have a Mac mini at home and I installed OVDC on it, which of course works great, but since I like to re-connect to the session that I use at work, I wanted to try out the external USB smart card reader feature. I ordered a cute, low cost device online and tried it out. As expected, it worked out of the box without -any- configuration. I took the device, plugged it into my Mac mini, started OVDC, plugged in my smart card, and I got the password screen (screensaver) to get into my Sun Ray session on my server at work. Nothing new here, this is a feature that's been in the product, but I had never tried it before and it works out of the box and is super easy and I just felt like sharing :-) Here are a few pictures: (1) login screen, (2) smart card reader without card, (3) password screen, (4) smart card reader with card.

    Read the article

  • Screencast not producing files

    - by JohnS
    I'm using Gnome 3 on 12.04 and trying to create a screencast. I start the screencast using the Ctrl-Alt-Shift-R shortcut and the red light appears in the bottom right corner. I go about my business, then press the key combination again when done. The problem is that the screencast file gets generated maybe 1 out of 10 times. Is there a log file I can look at to determine the issue? How about a settings file?

    UPDATE: I did some additional testing. What's happening is that the screencast does work, but it appends the new video to the existing file, even if the file is renamed or moved to trash. Emptying the trash does not create a new file either; not sure where the video gets recorded to then. The only reliable way I've found to have a new file created is to log out of the session and log back in. Is this expected behaviour? Is there a way to force screencast to create a new file every time Ctrl-Alt-Shift-R is pressed?

    Read the article

  • Groovy Refactoring in NetBeans

    - by Martin Janicek
    Hi guys, during NetBeans 7.3 feature development I spent quite a lot of time trying to bring some basic Groovy refactoring into the game. I've implemented find usages and rename refactoring for some basic constructs (class types, fields, properties, variables and methods). It's certainly not perfect and it will definitely need a lot of fixes and improvements to get it hundred percent reliable, but I need to start somehow :) I would like to ask all of you to test it as much as possible and file new tickets for the cases where it doesn't work as expected (e.g. some occurrence which should be in usages isn't there, etc.). It's really important for me, because I don't have a real Groovy project and thus I can test only some simple cases. I can promise that with your help we can make it really useful for the next release. Also please be aware that the current version focuses only on .groovy files. That means it won't find any usages from .java files (and the same applies to finding usages from Java files: it won't find any Groovy usages). I know it's not ideal, but as I said, we have to start somehow and it wasn't possible to make it all-in-one, so the only other option was to wait for NetBeans 7.4. I'll focus on better Java-Groovy integration in the next release (not only in refactoring, but also in navigation, code completion etc.) BTW: I've created a new component with the surprising name "Refactoring" in our bugzilla[1], so please put the reported issues into this category. [1] http://netbeans.org/bugzilla/buglist.cgi?product=groovy;component=Refactoring

    Read the article

  • Windows Phone 7 : Dragging and flicking UI controls

    - by TechTwaddle
    Who would want to flick and drag UI controls!? There might not be many use cases, but I think some concepts here are worthy of a post. So we will create a simple Silverlight application for Windows Phone 7, containing a canvas element on which we'll place a button control and an image, and then, as the title says, drag and flick the controls. Here's Mainpage.xaml:

        <Grid x:Name="LayoutRoot" Background="Transparent">
          <Grid.RowDefinitions>
            <RowDefinition Height="Auto"/>
            <RowDefinition Height="*"/>
          </Grid.RowDefinitions>

          <!--TitlePanel contains the name of the application and page title-->
          <StackPanel x:Name="TitlePanel" Grid.Row="0" Margin="12,17,0,28">
            <TextBlock x:Name="ApplicationTitle" Text="KINETICS" Style="{StaticResource PhoneTextNormalStyle}"/>
            <TextBlock x:Name="PageTitle" Text="drag and flick" Margin="9,-7,0,0" Style="{StaticResource PhoneTextTitle1Style}"/>
          </StackPanel>

          <!--ContentPanel - place additional content here-->
          <Grid x:Name="ContentPanel" Grid.Row="1" >
            <Canvas x:Name="MainCanvas" HorizontalAlignment="Stretch" VerticalAlignment="Stretch">
              <Canvas.Background>
                <LinearGradientBrush StartPoint="0 0" EndPoint="0 1">
                  <GradientStop Offset="0" Color="Black"/>
                  <GradientStop Offset="1.5" Color="BlanchedAlmond"/>
                </LinearGradientBrush>
              </Canvas.Background>
            </Canvas>
          </Grid>
        </Grid>

    The second row in the main grid contains a canvas element, MainCanvas, with its horizontal and vertical alignment set to stretch so that it occupies the entire grid. The canvas background is a linear gradient brush starting with Black and ending with BlanchedAlmond. We'll add the button and image control to this canvas at run time. Moving to Mainpage.xaml.cs, the Mainpage class contains the following members:

        public partial class MainPage : PhoneApplicationPage
        {
            Button FlickButton;
            Image FlickImage;

            FrameworkElement ElemToMove = null;
            double ElemVelX, ElemVelY;

            const double SPEED_FACTOR = 60;

            DispatcherTimer timer;

    FlickButton and FlickImage are the controls that we'll add to the canvas. ElemToMove, ElemVelX and ElemVelY will be used by the timer callback to move the UI control. SPEED_FACTOR is used to scale the velocities of UI controls. Here's the Mainpage constructor:

        // Constructor
        public MainPage()
        {
            InitializeComponent();

            AddButtonToCanvas();

            AddImageToCanvas();

            timer = new DispatcherTimer();
            timer.Interval = TimeSpan.FromMilliseconds(35);
            timer.Tick += new EventHandler(OnTimerTick);
        }

    We'll look at those AddButton and AddImage functions in a moment. The constructor initializes a timer which fires every 35 milliseconds; this timer will be started after the flick gesture completes with some inertia.
    Back to the AddButton and AddImage functions:

        void AddButtonToCanvas()
        {
            LinearGradientBrush brush;
            GradientStop stop1, stop2;

            Random rand = new Random(DateTime.Now.Millisecond);

            FlickButton = new Button();
            FlickButton.Content = "";
            FlickButton.Width = 100;
            FlickButton.Height = 100;

            brush = new LinearGradientBrush();
            brush.StartPoint = new Point(0, 0);
            brush.EndPoint = new Point(0, 1);

            stop1 = new GradientStop();
            stop1.Offset = 0;
            stop1.Color = Colors.White;

            stop2 = new GradientStop();
            stop2.Offset = 1;
            stop2.Color = (Application.Current.Resources["PhoneAccentBrush"] as SolidColorBrush).Color;

            brush.GradientStops.Add(stop1);
            brush.GradientStops.Add(stop2);

            FlickButton.Background = brush;

            Canvas.SetTop(FlickButton, rand.Next(0, 400));
            Canvas.SetLeft(FlickButton, rand.Next(0, 200));

            MainCanvas.Children.Add(FlickButton);

            //subscribe to events
            FlickButton.ManipulationDelta += new EventHandler<ManipulationDeltaEventArgs>(OnManipulationDelta);
            FlickButton.ManipulationCompleted += new EventHandler<ManipulationCompletedEventArgs>(OnManipulationCompleted);
        }

    This function is basically glorifying a simple task. After creating the button and setting its height and width, its background is set to a linear gradient brush. The direction of the gradient is from top towards bottom, and notice that the second stop color is the PhoneAccentColor, which changes along with the theme of the device. The line

        stop2.Color = (Application.Current.Resources["PhoneAccentBrush"] as SolidColorBrush).Color;

    does the magic of extracting the PhoneAccentBrush from the application's resources, getting its color and assigning it to the gradient stop. The AddImage function is straightforward in comparison:

        void AddImageToCanvas()
        {
            Random rand = new Random(DateTime.Now.Millisecond);

            FlickImage = new Image();
            FlickImage.Source = new BitmapImage(new Uri("/images/Marble.png", UriKind.Relative));

            Canvas.SetTop(FlickImage, rand.Next(0, 400));
            Canvas.SetLeft(FlickImage, rand.Next(0, 200));

            MainCanvas.Children.Add(FlickImage);

            //subscribe to events
            FlickImage.ManipulationDelta += new EventHandler<ManipulationDeltaEventArgs>(OnManipulationDelta);
            FlickImage.ManipulationCompleted += new EventHandler<ManipulationCompletedEventArgs>(OnManipulationCompleted);
        }

    The ManipulationDelta and ManipulationCompleted handlers are the same for both the button and the image. OnManipulationDelta() should look familiar; a similar implementation was used in the previous post:

        void OnManipulationDelta(object sender, ManipulationDeltaEventArgs args)
        {
            FrameworkElement Elem = sender as FrameworkElement;

            double Left = Canvas.GetLeft(Elem);
            double Top = Canvas.GetTop(Elem);

            Left += args.DeltaManipulation.Translation.X;
            Top += args.DeltaManipulation.Translation.Y;

            //check for bounds
            if (Left < 0)
            {
                Left = 0;
            }
            else if (Left > (MainCanvas.ActualWidth - Elem.ActualWidth))
            {
                Left = MainCanvas.ActualWidth - Elem.ActualWidth;
            }

            if (Top < 0)
            {
                Top = 0;
            }
            else if (Top > (MainCanvas.ActualHeight - Elem.ActualHeight))
            {
                Top = MainCanvas.ActualHeight - Elem.ActualHeight;
            }

            Canvas.SetLeft(Elem, Left);
            Canvas.SetTop(Elem, Top);
        }

    All it does is calculate the control's position, check for bounds, and then set the top and left of the control.
    OnManipulationCompleted() is more interesting, because here we need to check whether the gesture completed with any inertia, and if it did, start the timer and continue to move the UI control until it slowly comes to a halt:

        void OnManipulationCompleted(object sender, ManipulationCompletedEventArgs args)
        {
            FrameworkElement Elem = sender as FrameworkElement;

            if (args.IsInertial)
            {
                ElemToMove = Elem;

                Debug.WriteLine("Linear VelX:{0:0.00}  VelY:{1:0.00}", args.FinalVelocities.LinearVelocity.X,
                    args.FinalVelocities.LinearVelocity.Y);

                ElemVelX = args.FinalVelocities.LinearVelocity.X / SPEED_FACTOR;
                ElemVelY = args.FinalVelocities.LinearVelocity.Y / SPEED_FACTOR;

                timer.Start();
            }
        }

    ManipulationCompletedEventArgs contains a member, IsInertial, which is set to true if the manipulation completed with some inertia. args.FinalVelocities.LinearVelocity.X and .Y contain the velocities along the X and Y axes. We need to scale these values down so they can be used to increment the UI control's position sensibly. A reference to the UI control is stored in ElemToMove and the velocities are stored as well; these will be used in the timer callback to access the UI control. And finally, we start the timer. The timer callback function is as follows:

        void OnTimerTick(object sender, EventArgs e)
        {
            if (null != ElemToMove)
            {
                double Left, Top;
                Left = Canvas.GetLeft(ElemToMove);
                Top = Canvas.GetTop(ElemToMove);

                Left += ElemVelX;
                Top += ElemVelY;

                //check for bounds
                if (Left < 0)
                {
                    Left = 0;
                    ElemVelX *= -1;
                }
                else if (Left > (MainCanvas.ActualWidth - ElemToMove.ActualWidth))
                {
                    Left = MainCanvas.ActualWidth - ElemToMove.ActualWidth;
                    ElemVelX *= -1;
                }

                if (Top < 0)
                {
                    Top = 0;
                    ElemVelY *= -1;
                }
                else if (Top > (MainCanvas.ActualHeight - ElemToMove.ActualHeight))
                {
                    Top = MainCanvas.ActualHeight - ElemToMove.ActualHeight;
                    ElemVelY *= -1;
                }

                Canvas.SetLeft(ElemToMove, Left);
                Canvas.SetTop(ElemToMove, Top);

                //reduce x,y velocities gradually
                ElemVelX *= 0.9;
                ElemVelY *= 0.9;

                //when velocities become too low, break
                if (Math.Abs(ElemVelX) < 1.0 && Math.Abs(ElemVelY) < 1.0)
                {
                    timer.Stop();
                    ElemToMove = null;
                }
            }
        }

    If ElemToMove is not null, we get the top and left values of the control and increment them with the X and Y velocities. We check for bounds, and if the control goes out of bounds we reverse its velocity. Towards the end, the velocities are reduced by 10% every time the timer callback runs, and when they reach values that are too low the timer is stopped and ElemToMove is set to null. Here's a short video of the program; the video is a little dodgy because my display driver refuses to run the animations smoothly. The flicks aren't always recognised, but the program should run well on an actual device (or on a PC with a better configuration). You can download the source code from here: ButtonDragAndFlick.zip

    Read the article

  • Configure open_basedir under Plesk

    - by cori
    This might be a question for ServerFault, and if it wasn't for the Plesk aspect I would ask it there to start with, so if it's better suited for over there let me know and I'll move it. I'm working on a dedicated server set up as a reseller account with Plesk to manage the domains and server configuration, and I need to add a directory to the local open_basedir configuration for a specific vhost. Given Plesk's normal methodology, I expected to be able to go to /var/www/vhost/{%DOMAINNAME%}/conf, modify vhost.conf, and place a new value there, as I have successfully done with other configuration settings for this domain (turning safe_mode off, for instance). When I do so, however, the new setting doesn't take (per phpinfo();). If I edit httpd.conf (which the Plesk configuration specifically says not to do in the notes at the top of httpd.conf) the setting takes. Is there something specific about the open_basedir setting that makes it not configurable in vhost.conf? How much trouble am I letting myself in for by editing the vhost-specific httpd.conf (I imagine if someone makes changes in the Plesk web interface it might be overwritten, but what other risk is there)? Thanks!

    Read the article

  • Payroll Customers Must Apply Mandatory Patches to Maintain Your Supportability

    - by DanaD
    The HRMS Suite of products has minimum required Rollup Patch (RUP) levels as well as additional mandatory patches that our customers must apply to ensure they are in compliance for support. Without these patches, customers risk not being able to apply any fixes for issues they encounter, as these RUPs and mandatory patches are the minimum patch level expected by Oracle Support and Oracle Development. Core Payroll and International Payroll customers must apply the yearly Rollup Patch within 12 months of its issue. Legislative Payroll customers have additional requirements for the Rollup Patch, as the RUP generally is a pre-requisite for the next Year End/Fourth Quarter/Year Begin payroll processing supported by Oracle. These minimum RUP patches and other mandatory patches for your product or legislation are created with the following goals in mind:

        Compliance: Manage the people in your organization within the requirements of a specific country.
        Supportability: Ensure you are on a common code base so that if problems are identified, patches can be readily provided to you.
        Reliability: Reliable code with multiple customer downloads and comprehensive testing.

    For the listing of Mandatory Rollup Patches for Oracle Payroll please view Doc ID 295406.1: Mandatory Family Pack/Rollup Patch (RUP) Levels for Oracle Payroll. For the listing of Mandatory Patches for the HRMS Suite please view Doc ID 1160507.1: Oracle E-Business Suite HCM Information Center – Consolidated HRMS Mandatory Patch List. For information on the latest Rollup Patches (RUPs) for the HRMS Suite please view Doc ID 135266.1: Oracle HRMS Product Family – Release 11i & 12 Information.

    Read the article
