Search Results

Search found 10312 results on 413 pages for 'compiler bug'.


  • No updates in my Raring

    - by zatloukal-frantisek
Since upgrading from Quantal to Raring I am not receiving any updates. For example, the firefox package - I have version 17 installed and apt-get update && apt-get upgrade does not find updates. The output from apt-show-versions:

fanys@fanys-netbook:~$ apt-show-versions firefox
firefox 17.0+build2-0ubuntu0.12.10.1 newer than version in archive
fanys@fanys-netbook:~$ apt-show-versions unity
unity/raring uptodate 6.12.0-0ubuntu1

I tried to remove the contents of /var/lib/apt/lists/ and redo the package refresh (apt-get update), but it is still the same issue. /etc/apt/sources.list contents:

# See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
# newer versions of the distribution.
deb http://cz.archive.ubuntu.com/ubuntu/ raring main restricted
deb-src http://cz.archive.ubuntu.com/ubuntu/ raring main restricted
## Major bug fix updates produced after the final release of the
## distribution.
deb http://cz.archive.ubuntu.com/ubuntu/ raring-updates main restricted
deb-src http://cz.archive.ubuntu.com/ubuntu/ raring-updates main restricted
## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team. Also, please note that software in universe WILL NOT receive any
## review or updates from the Ubuntu security team.
deb http://cz.archive.ubuntu.com/ubuntu/ raring universe
deb-src http://cz.archive.ubuntu.com/ubuntu/ raring universe
deb http://cz.archive.ubuntu.com/ubuntu/ raring-updates universe
deb-src http://cz.archive.ubuntu.com/ubuntu/ raring-updates universe
## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team, and may not be under a free licence. Please satisfy yourself as to
## your rights to use the software. Also, please note that software in
## multiverse WILL NOT receive any review or updates from the Ubuntu
## security team.
deb http://cz.archive.ubuntu.com/ubuntu/ raring multiverse
deb-src http://cz.archive.ubuntu.com/ubuntu/ raring multiverse
deb http://cz.archive.ubuntu.com/ubuntu/ raring-updates multiverse
deb-src http://cz.archive.ubuntu.com/ubuntu/ raring-updates multiverse
## N.B. software from this repository may not have been tested as
## extensively as that contained in the main release, although it includes
## newer versions of some applications which may provide useful features.
## Also, please note that software in backports WILL NOT receive any review
## or updates from the Ubuntu security team.
deb http://security.ubuntu.com/ubuntu raring-security main restricted
deb-src http://security.ubuntu.com/ubuntu raring-security main restricted
deb http://security.ubuntu.com/ubuntu raring-security universe
deb-src http://security.ubuntu.com/ubuntu raring-security universe
deb http://security.ubuntu.com/ubuntu raring-security multiverse
deb-src http://security.ubuntu.com/ubuntu raring-security multiverse
## Uncomment the following two lines to add software from Canonical's
## 'partner' repository.
## This software is not part of Ubuntu, but is offered by Canonical and the
## respective vendors as a service to Ubuntu users.
deb http://archive.canonical.com/ubuntu raring partner
deb-src http://archive.canonical.com/ubuntu raring partner
## This software is not part of Ubuntu, but is offered by third-party
## developers who want to ship their latest software.
deb http://extras.ubuntu.com/ubuntu raring main
deb-src http://extras.ubuntu.com/ubuntu raring main
deb http://cz.archive.ubuntu.com/ubuntu/ raring-proposed main universe restricted multiverse
deb http://cz.archive.ubuntu.com/ubuntu/ raring-backports main universe restricted multiverse

I have had no updates from dist-upgrade for 4 days. There is one package kept at its current version: libexttextcat-data. Thanks in advance.

    Read the article

  • PTLQueue : a scalable bounded-capacity MPMC queue

    - by Dave
Title: Fast concurrent MPMC queue -- I've used the following concurrent queue algorithm enough that it warrants a blog entry. I'll sketch out the design of a fast and scalable multiple-producer multiple-consumer (MPMC) concurrent queue called PTLQueue. The queue has bounded capacity and is implemented via a circular array. Bounded capacity can be a useful property if there's a mismatch between producer rates and consumer rates, where an unbounded queue might otherwise result in excessive memory consumption by virtue of the container nodes that -- in some queue implementations -- are used to hold values. A bounded-capacity queue can provide flow control between components. Beware, however, that bounded collections can also result in resource deadlock if abused. The put() and take() operators are partial and wait for the collection to become non-full or non-empty, respectively. Put() and take() do not allocate memory, and are not vulnerable to the ABA pathologies. The PTLQueue algorithm can be implemented equally well in C/C++ and Java.

Partial operators are often more convenient than total methods. In many use cases, if the preconditions aren't met there's nothing else useful the thread can do, so it may as well wait via a partial method. An exception is the case of work-stealing queues, where a thief might scan a set of queues from which it could potentially steal. Total methods return ASAP with a success-failure indication. (It's tempting to describe a queue or API as blocking or non-blocking instead of partial or total, but non-blocking is already an overloaded concurrency term. Perhaps waiting/non-waiting or patient/impatient might be better terms). It's also trivial to construct partial operators by busy-waiting via total operators, but such constructs may be less efficient than an operator explicitly and intentionally designed to wait.

A PTLQueue instance contains an array of slots, where each slot has volatile Turn and MailBox fields. The array has power-of-two length, allowing mod/div operations to be replaced by masking. We assume sensible padding and alignment to reduce the impact of false sharing. (On x86 I recommend 128-byte alignment and padding because of the adjacent-sector prefetch facility). Each queue also has PutCursor and TakeCursor cursor variables, each of which should be sequestered as the sole occupant of a cache line or sector. You can opt to use 64-bit integers if concerned about wrap-around aliasing in the cursor variables. Put(null) is considered illegal, but the caller or implementation can easily check for and convert null to a distinguished non-null proxy value if null happens to be a value you'd like to pass. Take() will accordingly convert the proxy value back to null. An advantage of PTLQueue is that you can use atomic fetch-and-increment for the partial methods. We initialize each slot at index I with (Turn=I, MailBox=null). Both cursors are initially 0. All shared variables are considered "volatile" and atomics such as CAS and AtomicFetchAndIncrement are presumed to have bidirectional fence semantics. Finally, T is the templated type. I've sketched out a total tryTake() method below that allows the caller to poll the queue. tryPut() has an analogous construction.
// PTLQueue :
Put(v) :
  // producer : partial method - waits as necessary
  assert v != null
  assert Mask >= 1 && (Mask & (Mask+1)) == 0    // Document invariants
  // doorway step
  // Obtain a sequence number -- ticket
  // As a practical concern the ticket value is temporally unique
  // The ticket also identifies and selects a slot
  auto tkt = AtomicFetchIncrement (&PutCursor, 1)
  slot * s = &Slots[tkt & Mask]
  // waiting phase :
  // wait for slot's generation to match the tkt value assigned to this put() invocation.
  // The "generation" is implicitly encoded as the upper bits in the cursor
  // above those used to specify the index : tkt div (Mask+1)
  // The generation serves as an epoch number to identify a cohort of threads
  // accessing disjoint slots
  while s->Turn != tkt : Pause
  assert s->MailBox == null
  s->MailBox = v    // deposit and pass message

Take() :
  // consumer : partial method - waits as necessary
  auto tkt = AtomicFetchIncrement (&TakeCursor, 1)
  slot * s = &Slots[tkt & Mask]
  // 2-stage waiting :
  // First wait for turn for our generation
  // Acquire exclusive "take" access to slot's MailBox field
  // Then wait for the slot to become occupied
  while s->Turn != tkt : Pause
  // Concurrency in this section of code is now reduced to just 1 producer thread
  // vs 1 consumer thread.
  // For a given queue and slot, there will be at most one Take() operation running
  // in this section.
  // Consumer waits for producer to arrive and make slot non-empty
  // Extract message; clear mailbox; advance Turn indicator
  // We have an obvious happens-before relation :
  // Put(m) happens-before corresponding Take() that returns that same "m"
  for
    T v = s->MailBox
    if v != null :
      s->MailBox = null
      ST-ST barrier
      s->Turn = tkt + Mask + 1    // unlock slot to admit next producer and consumer
      return v
    Pause

tryTake() :
  // total method - returns ASAP with failure indication
  for
    auto tkt = TakeCursor
    slot * s = &Slots[tkt & Mask]
    if s->Turn != tkt : return null
    T v = s->MailBox    // presumptive return value
    if v == null : return null
    // ratify tkt and v values and commit by advancing cursor
    if CAS (&TakeCursor, tkt, tkt+1) != tkt : continue
    s->MailBox = null
    ST-ST barrier
    s->Turn = tkt + Mask + 1
    return v

The basic idea derives from the Partitioned Ticket Lock "PTL" (US20120240126-A1) and the MultiLane Concurrent Bag (US8689237). The latter is essentially a circular ring-buffer where the elements themselves are queues or concurrent collections. You can think of the PTLQueue as a partitioned ticket lock "PTL" augmented to pass values from lock to unlock via the slots. Alternatively, you could think of PTLQueue as a degenerate MultiLane bag where each slot or "lane" consists of a simple single-word MailBox instead of a general queue. Each lane in PTLQueue also has a private Turn field which acts like the Turn (Grant) variables found in PTL. Turn enforces strict FIFO ordering and restricts concurrency on the slot mailbox field to at most one simultaneous put() and take() operation. PTL uses a single "ticket" variable and per-slot Turn (grant) fields, while MultiLane has distinct PutCursor and TakeCursor cursors and abstract per-slot sub-queues. Both PTL and MultiLane advance their cursor and ticket variables with atomic fetch-and-increment.
PTLQueue borrows from both PTL and MultiLane and has distinct put and take cursors and per-slot Turn fields. Instead of per-slot queues, PTLQueue uses a simple single-word MailBox field. PutCursor and TakeCursor act like a pair of ticket locks, conferring "put" and "take" access to a given slot. PutCursor, for instance, assigns an incoming put() request to a slot and serves as a PTL "Ticket" to acquire "put" permission to that slot's MailBox field. To better explain the operation of PTLQueue we deconstruct the operation of put() and take() as follows. Put() first increments PutCursor, obtaining a new unique ticket. That ticket value also identifies a slot. Put() next waits for that slot's Turn field to match that ticket value. This is tantamount to using a PTL to acquire "put" permission on the slot's MailBox field. Finally, having obtained exclusive "put" permission on the slot, put() stores the message value into the slot's MailBox. Take() similarly advances TakeCursor, identifying a slot, and then acquires and secures "take" permission on a slot by waiting for Turn. Take() then waits for the slot's MailBox to become non-empty, extracts the message, and clears MailBox. Finally, take() advances the slot's Turn field, which releases both "put" and "take" access to the slot's MailBox. Note the asymmetry : put() acquires "put" access to the slot, but take() releases that lock. At any given time, for a given slot in a PTLQueue, at most one thread has "put" access and at most one thread has "take" access. This restricts concurrency from general MPMC to 1-vs-1. We have 2 ticket locks -- one for put() and one for take() -- each with its own "ticket" variable in the form of the corresponding cursor, but they share a single "Grant" egress variable in the form of the slot's Turn variable. Advancing the PutCursor, for instance, serves two purposes. First, we obtain a unique ticket which identifies a slot. Second, incrementing the cursor is the doorway protocol step to acquire the per-slot mutual exclusion "put" lock. The cursors and the operations to increment those cursors serve double duty: slot selection and ticket assignment for locking the slot's MailBox field.

At any given time a slot MailBox field can be in one of the following states: empty with no pending operations -- neutral state; empty with one or more waiting take() operations pending -- deficit; occupied with no pending operations; occupied with one or more waiting put() operations -- surplus; empty with a pending put() or pending put() and take() operations -- transitional; or occupied with a pending take() or pending put() and take() operations -- transitional.

The partial put() and take() operators can be implemented with an atomic fetch-and-increment operation, which may confer a performance advantage over a CAS-based loop. In addition we have independent PutCursor and TakeCursor cursors. Critically, a put() operation modifies PutCursor but does not access TakeCursor, and a take() operation modifies TakeCursor but does not access PutCursor. This acts to reduce coherence traffic relative to some other queue designs. It's worth noting that slow threads or obstruction in one slot (or "lane") does not impede or obstruct operations in other slots -- this gives us some degree of obstruction isolation. PTLQueue is not lock-free, however. The implementation above is expressed with polite busy-waiting (Pause), but it's trivial to implement per-slot parking and unparking to deschedule waiting threads.
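For concreteness, here is a compact C++11 sketch of the algorithm just described. It follows the pseudocode above (PutCursor, TakeCursor, per-slot Turn and MailBox, fetch-and-increment for the doorway step), but the class layout, the pointer payload (so that nullptr can serve as the empty-mailbox marker), and the omission of padding/alignment and of the tryTake()/tryPut() variants are simplifications of mine, not part of the original design.

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <vector>

template <typename T>
class PTLQueue {
    struct Slot {
        std::atomic<std::uint64_t> turn;     // generation / turn indicator
        std::atomic<T*>            mailbox;  // single-word message slot
        // NB: real code would pad each Slot (and the cursors) to avoid false sharing.
    };

    std::vector<Slot>          slots_;
    std::uint64_t              mask_;
    std::atomic<std::uint64_t> putCursor_{0};
    std::atomic<std::uint64_t> takeCursor_{0};

    static void pause() { /* e.g. __builtin_ia32_pause() on x86 */ }

public:
    // capacity must be a power of two so that "mod" can be done by masking.
    explicit PTLQueue(std::size_t capacity)
        : slots_(capacity), mask_(capacity - 1) {
        for (std::size_t i = 0; i < capacity; ++i) {
            slots_[i].turn.store(i, std::memory_order_relaxed);       // Turn = index
            slots_[i].mailbox.store(nullptr, std::memory_order_relaxed);
        }
    }

    void put(T* v) {   // partial method: waits until its slot becomes available
        std::uint64_t tkt = putCursor_.fetch_add(1);    // doorway step: take a ticket
        Slot& s = slots_[tkt & mask_];
        while (s.turn.load(std::memory_order_acquire) != tkt) pause();
        s.mailbox.store(v, std::memory_order_release);  // deposit and pass message
    }

    T* take() {        // partial method: waits until a message arrives
        std::uint64_t tkt = takeCursor_.fetch_add(1);
        Slot& s = slots_[tkt & mask_];
        while (s.turn.load(std::memory_order_acquire) != tkt) pause();
        T* v;
        while ((v = s.mailbox.load(std::memory_order_acquire)) == nullptr) pause();
        s.mailbox.store(nullptr, std::memory_order_relaxed);
        s.turn.store(tkt + mask_ + 1, std::memory_order_release);     // unlock slot
        return v;
    }
};
```

A producer calls q.put(&msg) and a consumer calls q.take(); the total tryTake() shown in the pseudocode can be layered on in the same style by replacing the blind fetch_add on takeCursor_ with a compare-and-swap.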
It's also easy to convert the queue to a more general deque by replacing the PutCursor and TakeCursor cursors with Left/Front and Right/Back cursors that can move in either direction. Specifically, to push and pop from the "left" side of the deque we would decrement and increment the Left cursor, respectively, and to push and pop from the "right" side of the deque we would increment and decrement the Right cursor, respectively. We used a variation of PTLQueue for message passing in our recent OPODIS 2013 paper.

There's quite a bit of related literature in this area. I'll call out a few relevant references:
Wilson's NYU Courant Institute UltraComputer dissertation from 1988 is a classic and the canonical starting point : Operating System Data Structures for Shared-Memory MIMD Machines with Fetch-and-Add. Regarding provenance and priority, I think PTLQueue or queues effectively equivalent to PTLQueue have been independently rediscovered a number of times. See CB-Queue and BNPBV, below, for instance. But Wilson's dissertation anticipates the basic idea and seems to predate all the others.
Gottlieb et al : Basic Techniques for the Efficient Coordination of Very Large Numbers of Cooperating Sequential Processors
Orozco et al : CB-Queue in Toward high-throughput algorithms on many-core architectures, which appeared in TACO 2012.
Meneghin et al : BNPVB family in Performance evaluation of inter-thread communication mechanisms on multicore/multithreaded architecture
Dmitry Vyukov : bounded MPMC queue (highly recommended)
Alex Otenko : US8607249 (highly related).
John Mellor-Crummey : Concurrent queues: Practical fetch-and-phi algorithms. Technical Report 229, Department of Computer Science, University of Rochester
Thomasson : FIFO Distributed Bakery Algorithm (very similar to PTLQueue).
Scott and Scherer : Dual Data Structures

I'll propose an optimization left as an exercise for the reader. Say we wanted to reduce memory usage by eliminating inter-slot padding. Such padding is usually "dark" memory and otherwise unused and wasted. But eliminating the padding leaves us at risk of increased false sharing. Furthermore, let's say it was usually the case that the PutCursor and TakeCursor were numerically close to each other. (That's true in some use cases). We might still reduce false sharing by incrementing the cursors by some value other than 1 that is not trivially small and is coprime with the number of slots. Alternatively, we might increment the cursor by one and mask as usual, resulting in a logical index. We then use that logical index value to index into a permutation table, yielding an effective index for use in the slot array. The permutation table would be constructed so that nearby logical indices would map to more distant effective indices. (Open question: what should that permutation look like? Possibly some perversion of a Gray code or De Bruijn sequence might be suitable).

As an aside, say we need to busy-wait for some condition as follows : "while C == 0 : Pause". Let's say that C is usually non-zero, so we typically don't wait. But when C happens to be 0 we'll have to spin for some period, possibly brief. We can arrange for the code to be more machine-friendly with respect to the branch predictors by transforming the loop into : "if C == 0 : for { Pause; if C != 0 : break; }".
Critically, we want to restructure the loop so there's one branch that controls entry and another that controls loop exit. A concern is that your compiler or JIT might be clever enough to transform this back to "while C == 0 : Pause". You can sometimes avoid this by inserting a call to some type of very cheap "opaque" method that the compiler can't elide or reorder. On Solaris, for instance, you could use : "if C == 0 : { gethrtime(); for { Pause; if C != 0 : break; }}".

It's worth noting the obvious duality between locks and queues. If you have a strict FIFO lock implementation with local spinning and succession by direct handoff, such as MCS or CLH, then you can usually transform that lock into a queue.
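As an illustration of the loop restructuring described above, here is a minimal C++ sketch. The atomic flag stands in for the condition "C" in the text, and the call to std::chrono::steady_clock::now() plays the role the post assigns to gethrtime() on Solaris -- a cheap, opaque call the optimizer is unlikely to elide. Both choices are illustrative assumptions rather than anything prescribed by the post.

```cpp
#include <atomic>
#include <chrono>

std::atomic<int> flag{0};   // stands in for "C" in the text

inline void pause_cpu() { /* e.g. __builtin_ia32_pause() on x86 */ }

void waitForFlag() {
    // One branch controls entry into the slow path, another controls loop exit.
    if (flag.load(std::memory_order_acquire) == 0) {
        (void)std::chrono::steady_clock::now();   // opaque call; discourages re-fusing the loop
        for (;;) {
            pause_cpu();
            if (flag.load(std::memory_order_acquire) != 0) break;
        }
    }
}
```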
Acquire "put" access to slot via PTL-like lock Acquire "take" access to slot via PTL-like lock 2 locks : put and take -- at most one thread can access slot's mailbox Both locks use same "turn" field Like multilane : 2 cursors : put and take slot is simple 1-capacity mailbox instead of queue Borrow per-slot turn/grant from PTL Provides strict FIFO Lock slot : put-vs-put take-vs-take at most one put accesses slot at any one time at most one put accesses take at any one time reduction to 1-vs-1 instead of N-vs-M concurrency Per slot locks for put/take Release put/take by advancing turn * is instrumental in ... * P-V Semaphore vs lock vs K-exclusion * See also : FastQueues-excerpt.java dice-etc/queue-mpmc-bounded-blocking-circular-xadd/ * PTLQueue is the same as PTLQB - identical * Expedient return; ASAP; prompt; immediately * Lamport's Bakery algorithm : doorway step then waiting phase Threads arriving at doorway obtain a unique ticket number Threads enter in ticket order * In the terminology of Reed and Kanodia a ticket lock corresponds to the busy-wait implementation of a semaphore using an eventcount and a sequencer It can also be thought of as an optimization of Lamport's bakery lock was designed for fault-tolerance rather than performance Instead of spinning on the release counter, processors using a bakery lock repeatedly examine the tickets of their peers --

    Read the article

  • Does using GCC specific builtins qualify as incorporation within a project?

    - by DavidJFelix
I understand that linking to a program licensed under the GPL requires that you release the source of your program under the GPL as well, while the LGPL does not require this. The terminology of the (L)GPL is very clear about this: #include "gpl_program.h" means you'd have to license under the GPL, because you're linking to GPL-licensed code, while #include "lgpl_program.h" leaves you free to license however you want, since the LGPL doesn't prohibit linking from differently licensed code. Now, my question is about what isn't clear: [begin question] GCC is GPL licensed, and compiling with GCC does not constitute "integration" into your program, as the GPL puts it. Does using builtin functions (which are specific to GCC) constitute "incorporation", even though you haven't explicitly linked to this GPL-licensed code? My intuition tells me that this isn't the intention, but legality isn't always intuitive. I'm not actually worried, but I'm curious whether this could be considered the case. [end question] [begin aside] The reason for my equivocation is that GCC builtins like __builtin_clzl() or __builtin_expect() are technically an API and could be implemented in another way. For example, many builtins were replicated by LLVM, and the argument could be made that they are not implementation-specific to GCC. However, many builtins have no parallel: when compiled with GCC they will link in GPL-licensed code, and they will not compile at all on other compilers. If you make the argument here that the API could be replicated by another compiler, couldn't you make that identical claim about any program you link to, so long as you don't distribute that source? I understand that I'm being a legal snake about this, but it strikes me as odd that the GPL isn't more specific. I don't see this as a reasonable ploy for proprietary software creators to bypass the GPL, as they'd have to bundle the GPL software to make it work, removing their plausible deniability. However, isn't it possible that if builtins don't constitute linking, then open-source proponents who oppose the GPL could simply write a BSD/MIT/Apache/Apple-licensed product that links to a GPL'd program and claim that they intend to write a non-GPL interface identical to the GPL one, preserving their BSD license until it's actually compiled? [end aside] Sorry for the aside; I didn't think many people would follow why I care about this if I'm not facing any legal trouble or implications. Don't worry too much about the hypotheticals there, I'm just extrapolating what either answer to my actual question could imply.
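For readers who haven't met the builtins mentioned in the question, here is a small illustrative snippet. __builtin_expect and __builtin_clzl are the GCC-specific builtins the question names; the LIKELY macro, the portable fallback and the floor_log2 example are invented here purely to show how such builtins are typically wrapped, and say nothing either way about the licensing question.

```cpp
#include <cstddef>

#if defined(__GNUC__)
  #define LIKELY(x) __builtin_expect(!!(x), 1)
  static inline int clz_long(unsigned long v) { return __builtin_clzl(v); } // GCC builtin
#else
  #define LIKELY(x) (x)
  static inline int clz_long(unsigned long v) {   // portable fallback, one bit at a time
      int n = 0;
      for (unsigned long bit = 1UL << (sizeof(v) * 8 - 1); bit && !(v & bit); bit >>= 1)
          ++n;
      return n;
  }
#endif

// Example use: floor(log2(v)) for non-zero v, -1 otherwise.
int floor_log2(unsigned long v) {
    if (LIKELY(v != 0))
        return static_cast<int>(sizeof(v) * 8 - 1) - clz_long(v);
    return -1;   // __builtin_clzl(0) is undefined, so guard it
}
```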

    Read the article

  • Cross-platform independent development

    - by Joe Wreschnig
    Some years ago, if you wrote in C and some subset of C++ and used a sufficient number of platform abstractions (via SDL or whatever), you could run on every platform an indie could get on - Linux, Windows, Mac OS of various versions, obscure stuff like BeOS, and the open consoles like the GP2X and post-death Dreamcast. If you got a contract for a closed platform at some point, you could port your game to that platform with "minimal" code changes as well. Today, indie developers must use XNA to get on the Xbox 360 (and upcoming Windows phone); must not use XNA to work anywhere else but Windows; until recently had to use Java on Android; Flash doesn't run on phones, HTML5 doesn't work on IE. Unlike e.g. DirectX vs. OpenGL or Windows vs. Unix, these are changes to the core language you write your code in and can't be papered over without, basically, writing a compiler. You can move some game logic into scripts and include an interpreter - except when you can't, because the iPhone SDK doesn't allow it, and performance suffers because no one allows JIT. So what can you do if you want a really cross-platform portable game, or even just a significant body of engine and logic code? Is this not a problem because the platforms have fundamentally diverged - it's just plain not worthwhile to try to target both an iPhone and the Xbox 360 with any shared code because such a game would be bad? (I find this very unlikely. I can easily see wanting to share a game between a Windows Mobile phone and an Android, or an Xbox 360 and an iPad.) Are interfaces so high-level now that porting time is negligible? (I might believe this for business applications, but not for games with strict performance requirements.) Is this going to become more pronounced in the future? Is the split going to be, somewhat scarily, still down vendor lines? Will we all rely on high-level middleware like Flash or Unity to get anything cross-platform done? tl;dr - Is porting a problem, is it going to be a bigger problem in the future, and if so how do we solve it?

    Read the article

  • New security configuration flag in UCM PS3

    - by kyle.hatlestad
While the recent Patch Set 3 (PS3) release was mostly focused on bug fixes and such, a new configuration flag was added for security. In 10gR3 and prior versions, UCM had a component called Collaboration Manager which allowed project folders to be created and groups of users assigned as members to collaborate on documents. With this component came access control lists (ACLs) for content and folders. Users could assign specific security rights on each and every document and folder within a project. It was possible to enable these ACLs without having the Collaboration Manager component enabled, but it took some special instructions (see technote# 603148.1) and added some extraneous pieces still related to Collaboration Manager. When 11g came out, Collaboration Manager was no longer available, but the configuration settings to turn on ACLs were still there. Well, in PS3 they've been cleaned up a bit and a new configuration flag has been added to simply turn on the ACL fields and none of the other collaboration bits. To enable ACLs:

UseEntitySecurity=true

Along with this configuration flag to turn ACLs on, you also need to define which Security Groups will honor the ACL fields. If an ACL is applied to a content item with a Security Group outside this list, it will be ignored.

SpecialAuthGroups=HumanResources,Legal,Marketing

Save the settings and restart the instance. Upon restart, two new metadata fields will be created: xClbraUserList and xClbraAliasList. If you are using OracleTextSearch as the search indexer, be sure to run a Fast Rebuild on the collection. On the Check In, Search, and Update pages, values are added by simply typing in the value and getting a type-ahead list of possible values. Select the value, click Add, and then set the level of access (Read, Write, Delete, or Admin). If all of the fields are blank, then it simply falls back to just Security Group and Account access. As for how they are stored in the metadata fields, each entry starts with its identifier: an ampersand (&) symbol for users, an "at" (@) symbol for groups, and a colon (:) for roles. Following that is the entity name, and at the end is the level of access in parentheses, e.g. (RWDA). Each entry is separated by a comma. So if you were populating values through Batch Loader or an external source, the values would be defined this way. Detailed information on Access Control Lists can be found in the Oracle Fusion Middleware System Administrator's Guide for Oracle Content Server.
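To make the storage format above concrete, here is a small hedged C++ sketch that parses a metadata value of that shape. The format itself (identifier character, entity name, access letters in parentheses, comma-separated entries) is as described above; the parsing code and the sample value are purely illustrative and are not part of Oracle UCM.

```cpp
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

struct AclEntry {
    char        kind;    // '&' = user, '@' = group, ':' = role
    std::string name;    // entity name
    std::string access;  // access letters, e.g. "RWDA"
};

std::vector<AclEntry> parseAcl(const std::string& field) {
    std::vector<AclEntry> out;
    std::stringstream ss(field);
    std::string item;
    while (std::getline(ss, item, ',')) {                 // entries are comma-separated
        auto open  = item.find('(');
        auto close = item.find(')', open);
        if (open == std::string::npos || close == std::string::npos || open < 2)
            continue;                                      // skip malformed entries
        out.push_back({ item[0],
                        item.substr(1, open - 1),
                        item.substr(open + 1, close - open - 1) });
    }
    return out;
}

int main() {
    // Hypothetical field value, following the format described in the post.
    for (const auto& e : parseAcl("&jsmith(RWDA),@Legal(RW),:contributor(R)"))
        std::cout << e.kind << " " << e.name << " -> " << e.access << "\n";
}
```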

    Read the article

  • What is Database Continuous Integration?

    - by David Atkinson
Although not everyone is practicing continuous integration, many have at least heard of the concept. A recent poll on www.simple-talk.com indicates that 40% of respondents are employing the technique. It is widely accepted that the earlier issues are identified in the development process, the lower the cost of fixing them. The worst-case scenario, of course, is for the bug to be found by the customer following the product release. A number of Agile development best practices have evolved to combat this problem early in the development process, including pair programming, code inspections and unit testing. Continuous integration is one such Agile concept that tackles the problem at the point of committing a change to source control (alternatively, it can be run on a regular schedule). This triggers a sequence of events that compiles the code and performs a variety of tests. Often the continuous integration process is regarded as a build validation test, and if issues were to be identified at this stage, the testers would simply not 'waste their time' and would not touch the build at all. Such a ‘broken build’ will trigger an alert and the development team’s number one priority should be to resolve the issue. How application code is compiled and tested as part of continuous integration is well understood. However, this isn’t so clear for databases. Indeed, before I cover the mechanics of implementation, we need to decide what we mean by database continuous integration. For me, database continuous integration can be implemented as one or more of the following:
1) Your application code is being compiled and tested. You therefore need a database to be maintained at the corresponding version.
2) Just as a valid application should compile, so should the database. It should therefore be possible to build a new database from scratch.
3) Likewise, it should be possible to generate an upgrade script to take your already deployed databases to the latest version.
I will be covering these in further detail in future blogs. In the meantime, more information can be found in the whitepaper linked off www.red-gate.com/ci. If you have any questions, feel free to contact me directly or post a comment to this blog post.

    Read the article

  • Visual Studio 2010 and Target Framework Version

    - by Scott Dorman
Almost two years ago, I wrote about a Visual Studio macro that allows you to change the Target Framework version of all projects in a solution. If you don’t know, the Target Framework version is what tells the compiler which version of the .NET Framework to compile against (more information is available here) and can be set to one of the following values:
.NET Framework 2.0
.NET Framework 3.0
.NET Framework 3.5
.NET Framework 3.5 Client Profile
.NET Framework 4.0
.NET Framework 4.0 Client Profile
This can be easily accomplished by editing the project properties. The problem with this approach is that if you need to change a lot of projects at one time it becomes rather unwieldy. One possible solution is to edit the project files by hand in a text editor and change the <TargetFrameworkVersion /> and <TargetFrameworkProfile /> properties to the correct values. For example, for the .NET Framework 4.0 Client Profile, these values would be:
<TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
<TargetFrameworkProfile>Client</TargetFrameworkProfile>
Again, this is not only time consuming but can also be error-prone. The better solution is to automate this through the use of a Visual Studio macro. Since I had already created a macro to do this for Visual Studio 2008, I updated that macro to work with Visual Studio 2010 and .NET 4.0. It prompts you for the target framework version you want to set for all of the projects and then loops through each project in the solution and makes the change. If you select one of the Framework versions that support a Client Profile, it will ask if you want to use the Client Profile or the Full Profile. It is smart enough to skip project types that don’t support this property and projects that are already at the correct version. This version also incorporates the changes suggested by George (in the comments). The macro is available on my SkyDrive account. Download it to your <UserProfile>\Documents\Visual Studio 2010\Projects\VSMacros80\MyMacros folder, open the Visual Studio Macro IDE (Alt-F11) and add it as an existing item to the “MyMacros” project. I make no guarantees or warranties on this macro. I have tested it on several solutions and projects and everything seems to work and not cause any problems, but, as always, use with caution. Since it is a macro, you have the full source code available to investigate and see what it’s actually doing. If you find any bugs or make any useful changes, please let me know and I’ll update the macro. Technorati Tags: Macros, Visual Studio

    Read the article

  • Coding different states in Adventure Games

    - by Cardin
I'm planning out an adventure game, and can't figure out the right way to implement the behaviour of a level depending on the state of story progression. My single-player game features a huge world where the player has to interact with people in a town at various points in the game. However, depending on story progression, different things would be presented to the player; for example, the Guild Leader will change locations from the town square to various locations around the city; doors would only unlock at certain times of the day after finishing a particular routine; different cut-scene/trigger events happen only after a particular milestone has been reached. I naively thought of using a switch{} statement initially to decide what the NPC should say or at which location he could be found, and making quest objectives interactable only after checking a global game_state variable's condition. But I realised I would quickly run into a lot of different game states and switch cases in order to change the behaviour of an object. That switch statement would also be massively hard to debug, and I guess it might also be hard to use in a level editor. So I thought, instead of having a single object with multiple states, maybe I should have multiple instances of the same object, each with a single state. That way, if I use something like a level editor, I can put an instance of the NPC at all the different locations he could ever appear at, and also an instance for each conversation state he has. But that means there'll be a lot of inactive, invisible game objects floating around the level, which might be trouble for memory, or simply hard to see in a level editor, I don't know. Or, simply, make an identical but separate level for each game state. This feels like the cleanest and most bug-free way to do things, but it means massive manual work making sure each version of the level is really identical to the others. All my methods feel so inefficient, so to recap my question: is there a better or standardised way to implement the behaviour of a level depending on the state of story progression? PS: I don't have a level editor yet - thinking of using something like the JME SDK or making my own.
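As an illustration of the data-driven direction the question is circling around, here is a minimal C++ sketch in which an NPC's location and dialogue are looked up from a per-NPC table keyed by story progression instead of a switch. All of the names (StoryMilestone, NpcState, NpcSchedule) are invented for the example and are not from the question.

```cpp
#include <iostream>
#include <iterator>
#include <map>
#include <string>

enum class StoryMilestone { Prologue = 0, GuildJoined = 1, SiegeStarted = 2 };

struct NpcState {
    std::string location;
    std::string dialogueId;
};

// One table per NPC; this is the sort of data a level editor or data file could own.
using NpcSchedule = std::map<StoryMilestone, NpcState>;

NpcState currentState(const NpcSchedule& schedule, StoryMilestone progress) {
    // Use the entry for the latest milestone the player has already reached.
    auto it = schedule.upper_bound(progress);
    if (it == schedule.begin()) return {};   // nothing defined yet for this NPC
    return std::prev(it)->second;
}

int main() {
    NpcSchedule guildLeader = {
        { StoryMilestone::Prologue,     { "town_square", "gl_intro"  } },
        { StoryMilestone::GuildJoined,  { "guild_hall",  "gl_member" } },
        { StoryMilestone::SiegeStarted, { "castle_gate", "gl_siege"  } },
    };
    NpcState s = currentState(guildLeader, StoryMilestone::GuildJoined);
    std::cout << s.location << " / " << s.dialogueId << "\n";   // guild_hall / gl_member
}
```

The same table could live in a data file edited from a level editor, which sidesteps both the giant switch and the many-invisible-instances approaches.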

    Read the article

  • Running Teamsite User Admin tool IWUSERADM.exe from ASP.NET

    - by Narendra Tiwari
It has really been a head-scratching task for me. I've tried many options but nothing worked. Finally I found a workaround on Google to achieve this with the Task Scheduler.

PROBLEM
When we run the Teamsite user administration command line tool IWUSERADM.exe through ASP.NET it gives the following error: Application popup: cmd.exe - Application Error : The application failed to initialize properly (0xc0000142). Click on OK to terminate the application.

CAUSE
No specific cause; it seems to be a bug, supposed to be resolved by this Microsoft patch http://support.microsoft.com/kb/960266, and there is nothing related to a permission issue - my web application is impersonated with an administrator account. Of course, running a bat file from an admin account is a potential security threat, but for this scenario let's confine our discussion to running the command line tool.

RESOLUTION
I have not tried this patch as I am not permitted to run it on the server. Below are the steps to achieve the requirement.

1/ Create a batch file which runs IWUSERADM.exe:
echo - Add Teamsite User
CD E:\Appli\GN00\iw-home\bin
iwuseradm add-user %1

2/ Temporarily create a scheduled task and run the .bat file from that scheduled task in ASP.NET code, using TaskScheduler http://www.codeproject.com/KB/cs/tsnewlib.aspx.

3/ Here is the function:
private int AddTeamsiteUser(string strBatchFilePath, string strUser)
{
    //Get a ScheduledTasks object for the local computer.
    ScheduledTasks st = new ScheduledTasks();
    // Create a task
    Task t;
    try
    {
        t = st.CreateTask("~AddTeamsiteUser");
    }
    catch
    {
        throw new Exception("Schedule Task ~AddTeamsiteUser already exist.");
    }
    t.ApplicationName = strBatchFilePath;
    t.Parameters = strUser;
    t.Comment = "Adding user to Teamsite Application";
    t.SetAccountInformation(yourLogin, yourPassword); //Set the account under which the task should run.
    t.Save();
    t.Run();
    Thread.Sleep(2000); //for sync issue
    //Remove the scheduled task
    st.DeleteTask("~AddTeamsiteUser");
    return t.ExitCode;
}

Below are a few resources related to the above scenario:
- Task Scheduler Class Library for .NET: http://www.codeproject.com/KB/cs/tsnewlib.aspx
- Run a .BAT file from ASP.NET: http://codebetter.com/blogs/brendan.tompkins/archive/2004/05/13/13484.aspx
- TaskScheduler Class: http://msdn.microsoft.com/en-us/library/system.threading.tasks.taskscheduler.aspx
- Application hangs while running iwuseradm.exe through ASP.NET: http://bytes.com/topic/asp-net/answers/733098-system-diagnostics-process-hangs

    Read the article

  • ObjectStorageHelper<T> now available for Windows 8 RTM

    - by jamiet
In October 2011 I wrote a blog post entitled ObjectStorageHelper<T> – A WinRT utility for Windows 8 where I introduced a little utility class called ObjectStorageHelper<T> that I had been working on while noodling around on the Developer Preview of Windows 8. ObjectStorageHelper<T> makes it easy for anyone building apps for Windows 8 to save data to files. How easy? As easy as this:
var myPoco = new Poco() { IntProp = 1, StringProp = "one" };
var objectStorageHelper = new ObjectStorageHelper<Poco>(StorageType.Local);
await objectStorageHelper.SaveAsync(myPoco);
Compare that to the plumbing code that you would have to write otherwise:
var Obj = new Poco() { IntProp = 1, StringProp = "one" };
StorageFile file = null;
StorageFolder folder = GetFolder(storageType);
file = await folder.CreateFileAsync(FileName(Obj), CreationCollisionOption.ReplaceExisting);
IRandomAccessStream writeStream = await file.OpenAsync(FileAccessMode.ReadWrite);
using (Stream outStream = Task.Run(() => writeStream.AsStreamForWrite()).Result)
{
    serializer.Serialize(outStream, Obj);
    await outStream.FlushAsync();
}
and you can see how ObjectStorageHelper<T> can help save a Windows 8 developer quite a few headaches. ObjectStorageHelper<T> simply requires you to pass it an object to be saved, tell it where to save it (Roaming, Local or Temporary), and you’re done. Retrieving an object from storage is equally simple:
var objectStorageHelper = new ObjectStorageHelper<Poco>(StorageType.Local);
var myPoco = await objectStorageHelper.LoadAsync();
Please check the homepage for the project at http://winrtstoragehelper.codeplex.com/ for (much) more info. A number of people have used and tested ObjectStorageHelper<T> since those early days and one of those folks in particular, David Burela, was good enough to report a couple of bugs:
- Saving Asynchronously
- Save fails when class is in another project
As a result of David’s bug reports and some more extensive testing on my side, I have overhauled the initial code that I wrote last October and am confident that it is now much more robust and ready for primetime (check the commit history if you’re interested). The source code (which, again, you can find on Codeplex at http://winrtstoragehelper.codeplex.com/) includes a suite of unit tests to test all of the basic use cases (if you can think of any more please let me know). If you use this in any of your Windows 8 projects then please let me know. I love getting feedback and I’d also love to know if this is actually being used anywhere. @Jamiet

    Read the article

  • Thinktecture.IdentityServer RC

    - by Your DisplayName here!
I just uploaded the RC of IdentityServer to Codeplex. This release is feature complete, and if I don’t get any bug reports this is also pretty much the final V1.

Changes from B1
The configuration data access is now based on EF 4.1 code first. This makes it much easier to use different data stores. For RTM I will also provide a SQL script for SQL Server so you can move the configuration to a separate machine (e.g. for load balancing scenarios). I included the ASP.NET Universal Providers in the download. This adds official support for SQL Azure, SQL Server and SQL Compact for the membership, roles and profile features. Unfortunately the Universal Providers use a different schema than the original ASP.NET providers (that sucks btw!) – so I made them optional. If you want to use them, go to web.config and uncomment the new providers. The relying party registration entries now have added fields for extra data that you want to couple with the RP. One use case could be to give the UI a hint of how the login experience should look per RP. This allows a different look and feel for different relying parties. I also included a small helper API that you can use to retrieve the RP record based on the incoming WS-Federation query string. WS-Federation single sign-out is now conforming to the spec. Certificate-based endpoint identities for SSL endpoints are now optional. Added an initial configuration “wizard”. This sets up the signing certificate, issuer URI and site title on the first run.

Installation
This is still a “developer” release – that means it ships with source code that you have to build, etc. But from this point it should be a little more straightforward than it used to be:
- Make sure SSL is configured correctly for IIS
- Map the WebSite directory to a vdir in IIS
- Run the web site. This should bring up the initial configuration
- Make sure the worker process account has access to the signing certificate private key
- Make sure all your users are in the “IdentityServerUsers” role in your role store. Administrators need the “IdentityServerAdministrators” role
That should be it. Proper documentation will hopefully be available soon (any volunteers?). Please provide feedback! thanks!

    Read the article

  • My Package Version Number Appears Greater Yet apt-get Doesn't Select It

    - by nutznboltz
Backstory: It was determined that when using lxc container VMs, the Nagios nrpe shutdown script, when run on the host of the containers, would kill the nrpe processes inside the containers. This was remediated by changing the script to use pidfiles instead of searching the process table for the nrpe process. Regrettably, start-stop-daemon is a C program that resulted from translating a Perl script, and it shows. There are far too many global variables in start-stop-daemon.c, and although there are some nice blocks of comments there are far too few comments that explain the intent behind variable names such as "schedule" (the string "schedule" appears in many contexts). The manual page for start-stop-daemon strongly suggests that unless you use the "--retry" option the start-stop-daemon program may return before the process it sent a signal to actually calls exit() and terminates; however, it doesn't actually state this in plain English. The obtuseness of start-stop-daemon is most likely the reason that the "fixed" version of the script includes a dubious comment indicating that sometimes the pid file has not been removed. I can easily see why someone would not notice that the --retry option was missing. This bug also causes failures when the script is given the "restart" option; the nrpe daemon will shut down but not start up again. Did I mention that since applying the update our nrpe servers started crashing over and over? Repairing this is why I am doing this work. I have been working on remediating the fix. You can see my current work in this PPA.

Actual Question: The upstream version number of nagios-nrpe-server in lucid-updates is 2.12-4ubuntu1.10.04.1. My PPA uses the version number 2.12-4ubuntu1.10.04.1.1~ppa1~lucid1. I checked the rules here and used this test program, and I am led to believe that the version number I use in my PPA is greater than the one in lucid-updates, yet when I ran:

sudo add-apt-repository ppa:nutznboltz/nrpe-unbreak-lp-600941
sudo apt-get update
sudo aptitude dist-upgrade

the replacement package was not installed. I was able to install it using

sudo aptitude install nagios-nrpe-server=2.12-4ubuntu1.10.04.1.1~ppa1~lucid1

Can anyone explain this behavior? Why didn't my version number appear greater to "aptitude dist-upgrade"? Thanks

$ cat /etc/apt/preferences
Package: *
Pin: release a=lucid-backports
Pin-Priority: 400

Package: *
Pin: release a=lucid-security
Pin-Priority: 990

Package: *
Pin: release a=lucid-updates
Pin-Priority: 900

Package: *
Pin: release a=lucid-proposed
Pin-Priority: 400

$ ls /etc/apt/preferences.d/
$

This should not make any difference, as a PPA cannot be in any of those pockets. I went ahead and bumped the version number in the PPA to 2.12-4ubuntu1.10.04.2~ppa1~lucid1; I'll see if that makes a difference. I do notice that lintian complains: W: nagios-nrpe-server: debian-revision-not-well-formed 2.12-4ubuntu1.10.04.2~ppa1~lucid1

    Read the article

  • How (recipe) to build only one kernel module?

    - by Pro Backup
I have a bug in a Linux kernel module that causes the stock Ubuntu 14.04 kernel to oops (crash). That is why I want to edit/patch the source of only that single kernel module, to add some extra debug output. The kernel module in question is mvsas and it is not necessary to boot. For that reason I don't see any need to update any initrd images. I have read a lot of information (as shown below) and find the setup and build process confusing. I need two recipes:
1. how to set up/configure the build environment, once
2. the steps to perform after editing any source file of this kernel module (.c and .h) to turn that edit into a new kernel module (.ko)

The sources that have been used are:
build one kernel module - Google search
http://www.linuxquestions.org/questions/linux-kernel-70/rebuilding-a-single-kernel-module-595116/
http://stackoverflow.com/questions/8744087/how-to-recompile-just-a-single-kernel-module
http://www.pixelbeat.org/docs/rebuild_kernel_module.html
How do I build a single in-tree kernel module?
http://ubuntuforums.org/showthread.php?t=1153067
http://ubuntuforums.org/showthread.php?t=2112166
http://ubuntuforums.org/showthread.php?t=1115593
build one kernel module ubuntu - Google search
'make +single +kernel +module' - Ask Ubuntu
'make +kernel +module' - Ask Ubuntu
My makefile results in: No rule to make target `arch/x86/tools/relocs.c', needed
'"Invalid module format"' - Ask Ubuntu
Driver installation: compiling source code for newer kernel
Modprobe: 'Invalid nodule format', yet works after insmod
"Symbol version dump" "is missing" - Google search
http://stackoverflow.com/questions/9425523/should-i-care-that-the-symbol-version-dump-is-missing-how-do-i-get-one
Where can I find the corresponding Module.symvers and .config files for 12.04.3 i386 server?
"no symbol version for module_layout" when trying to load usbhid.ko
Broken links inside Linux header file folder
'make modules_install' - Ask Ubuntu
'modules_install' - Ask Ubuntu
Empty build directory in custom compiled kernel
Not able to see pr_info output
In which directory are the kernel source files and how can I recompile it?
How can I compile and install that patched libata-eh.c file?
'modules_install +depmod' - Ask Ubuntu
modules_install depmod - Google search
"make modules_install" - Google search
http://www.csee.umbc.edu/courses/undergraduate/CMSC421/fall02/burt/projects/howto_build_kernel.html
http://unix.stackexchange.com/questions/20864/what-happens-in-each-step-of-the-linux-kernel-building-process
https://wiki.ubuntu.com/KernelCustomBuild
http://www.cyberciti.biz/tips/build-linux-kernel-module-against-installed-kernel-source-tree.html
http://www.linuxforums.org/forum/kernel/170617-solved-make-modules_install-different-path.html
"make prepare" - Google search
"make prepare" "scripts/kconfig/conf --silentoldconfig Kconfig" - Google search
http://ubuntuforums.org/showthread.php?t=1963515
ubuntu "make prepare" version - Google search
http://stackoverflow.com/questions/8276245/how-to-compile-a-kernel-module-against-a-new-source
https://help.ubuntu.com/community/Kernel/Compile
How do I compile a kernel module?
How to add a custom driver to my kernel?
Compile and loading kernel module without compiling the kernel

    Read the article

  • Safely deploying changes to production servers

    - by oazabir
When you deploy incremental changes to a production server, which is running and live all the time, you sometimes see error messages like “Compiler Error Message: The Type ‘XXX’ exists in both…”. Sometimes you find the Application_Start event not firing although you shipped a new class, dll or web.config. Sometimes you find static variables not getting initialized, and so on. So many weird things happen on web servers when you incrementally deploy changes to a server that has been up and running for several weeks. So, I came up with a set of foolproof housekeeping steps that we always follow whenever we deploy an incremental change to our websites. These steps ensure that the websites are properly recycled, caches are cleared, and all the data stored at the Application level is initialized.

First of all, you should have multiple web servers behind a load balancer. This way you can take one server out of the production traffic, do your deployment and housekeeping tasks like restarting IIS, and then put it back. Then you can do it for the second server, and so on. This ensures there’s no outage for customers. If you can do it reasonably fast, hopefully customers won’t notice the discrepancy between the servers, some having new code and some having old code. You should only do this when your changes aren’t drastic. For example, you aren’t delivering a completely revamped UI. In that case, some users hitting server1 with the latest UI will suddenly get a completely different experience, and then on the next page refresh they might hit server2 with the old code and get a totally different experience. This works for incremental, non-dramatic changes only.

During deployment you should follow these steps:
1. Take server X out of the load balancer so that it does not get any traffic.
2. Stop all Windows services on the server.
3. Stop IIS.
4. Delete the Temporary ASP.NET folders of all .NET versions, in case you have multiple .NET versions running. You can follow this link.
5. Deploy the changes.
6. Flush any distributed cache you have, for example Velocity or Memcached.
7. Start IIS.
8. Start the Windows services on the server.
9. Warm up all websites by hitting major URLs on the websites. You should have some automated script to do this. You can use tinyget to hit some major URLs, especially pages that take a lot of time to compile. Read my post on keeping websites warm with zero coding.
10. Put server X back into the load balancer so that it starts receiving traffic.

That’s it. It should give you a clean deployment and prevent unexpected errors. You should print these steps and hang them at the desks of your deployment guys so that they never forget during deployment pressure.

    Read the article

  • Github Organization Repositories, Issues, Multiple Developers, and Forking - Best Workflow Practices

    - by Jim Rubenstein
A weird title, yes, but I've got a bit of ground to cover, I think. We have an organization account on GitHub with private repositories. We want to use GitHub's native issues/pull-requests features (pull requests are basically exactly what we want as far as code reviews and feature discussions). We found the tool hub by defunkt, which has a cool little feature of being able to convert an existing issue to a pull request and automatically associate your current branch with it. I'm wondering if it is best practice to have each developer in the organization fork the organization's repository to do their feature work/bug fixes/etc. This seems like a pretty solid workflow (as it's basically what every open source project on GitHub does), but we want to be sure that we can track issues and pull requests from ONE source, the organization's repository. So I have a few questions:
1. Is a fork-per-developer approach appropriate in this case? It seems like it could be a little overkill. I'm not sure that we need a fork for every developer, unless we introduce developers who don't have direct push access and need all their code reviewed. In that case, we would want to institute a policy like that for those developers only. So, which is better? All developers in a single repository, or a fork for everyone?
2. Does anyone have experience with the hub tool, specifically the pull-request feature? If we do a fork-per-developer (or even forks only for less-privileged devs), will the pull-request feature of hub operate on the pull requests from the upstream master repository (the organization's repository), or does it have different behavior?

EDIT: I did some testing with issues, forks, and pull requests and found the following. If you create an issue on your organization's repository, then fork the repository from your organization to your own GitHub account, do some changes, and merge to your fork's master branch, then when you try to run hub -i <issue #> you get an error: "User is not authorized to modify the issue." So, apparently, that workflow won't work.

    Read the article

  • Sniffing out SQL Code Smells: Inconsistent use of Symbolic names and Datatypes

    - by Phil Factor
    It is an awkward feeling. You’ve just delivered a database application that seems to be working fine in production, and you just run a few checks on it. You discover that there is a potential bug that, out of sheer good chance, hasn’t kicked in to produce an error; but it lurks, like a smoking bomb. Worse, maybe you find that the bug has started its evil work of corrupting the data, but in ways that nobody has, so far detected. You investigate, and find the damage. You are somehow going to have to repair it. Yes, it still very occasionally happens to me. It is not a nice feeling, and I do anything I can to prevent it happening. That’s why I’m interested in SQL code smells. SQL Code Smells aren’t necessarily bad practices, but just show you where to focus your attention when checking an application. Sometimes with databases the bugs can be subtle. SQL is rather like HTML: the language does its best to try to carry out your wishes, rather than to be picky about your bugs. Most of the time, this is a great benefit, but not always. One particular place where this can be detrimental is where you have implicit conversion between different data types. Most of the time it is completely harmless but we’re  concerned about the occasional time it isn’t. Let’s give an example: String truncation. Let’s give another even more frightening one, rounding errors on assignment to a number of different precision. Each requires a blog-post to explain in detail and I’m not now going to try. Just remember that it is not always a good idea to assign data to variables, parameters or even columns when they aren’t the same datatype, especially if you are relying on implicit conversion to work its magic.For details of the problem and the consequences, see here:  SR0014: Data loss might occur when casting from {Type1} to {Type2} . For any experienced Database Developer, this is a more frightening read than a Vampire Story. This is why one of the SQL Code Smells that makes me edgy, in my own or other peoples’ code, is to see parameters, variables and columns that have the same names and different datatypes. Whereas quite a lot of this is perfectly normal and natural, you need to check in case one of two things have gone wrong. Either sloppy naming, or mixed datatypes. Sure it is hard to remember whether you decided that the length of a log entry was 80 or 100 characters long, or the precision of a number. That is why a little check like this I’m going to show you is excellent for tidying up your code before you check it back into source Control! 1/ Checking Parameters only If you were just going to check parameters, you might just do this. It simply groups all the parameters, either input or output, of all the routines (e.g. stored procedures or functions) by their name and checks to see, in the HAVING clause, whether their data types are all the same. If not, it lists all the examples and their origin (the routine) Even this little check can occasionally be scarily revealing. ;WITH userParameter AS  ( SELECT   c.NAME AS ParameterName,  OBJECT_SCHEMA_NAME(c.object_ID) + '.' 
+ OBJECT_NAME(c.object_ID) AS ObjectName,  t.name + ' '     + CASE     --we may have to put in the length            WHEN t.name IN ('char', 'varchar', 'nchar', 'nvarchar')             THEN '('               + CASE WHEN c.max_length = -1 THEN 'MAX'                ELSE CONVERT(VARCHAR(4),                    CASE WHEN t.name IN ('nchar', 'nvarchar')                      THEN c.max_length / 2 ELSE c.max_length                    END)                END + ')'         WHEN t.name IN ('decimal', 'numeric')             THEN '(' + CONVERT(VARCHAR(4), c.precision)                   + ',' + CONVERT(VARCHAR(4), c.Scale) + ')'         ELSE ''      END  --we've done with putting in the length      + CASE WHEN XML_collection_ID <> 0         THEN --deal with object schema names             '(' + CASE WHEN is_XML_Document = 1                    THEN 'DOCUMENT '                    ELSE 'CONTENT '                   END              + COALESCE(               (SELECT QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)                FROM sys.xml_schema_collections sc                INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID                WHERE sc.xml_collection_ID = c.XML_collection_ID),'NULL') + ')'          ELSE ''         END        AS [DataType]  FROM sys.parameters c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID  WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys'   AND parameter_id>0)SELECT CONVERT(CHAR(80),objectName+'.'+ParameterName),DataType FROM UserParameterWHERE ParameterName IN   (SELECT ParameterName FROM UserParameter    GROUP BY ParameterName    HAVING MIN(Datatype)<>MAX(DataType))ORDER BY ParameterName   so, in a very small example here, we have a @ClosingDelimiter variable that is only CHAR(1) when, by the looks of it, it should be up to ten characters long, or even worse, a function that should be a char(1) and seems to let in a string of ten characters. Worth investigating. Then we have a @Comment variable that can't decide whether it is a VARCHAR(2000) or a VARCHAR(MAX) 2/ Columns and Parameters Actually, once we’ve cleared up the mess we’ve made of our parameter-naming in the database we’re inspecting, we’re going to be more interested in listing both columns and parameters. We can do this by modifying the routine to list columns as well as parameters. Because of the slight complexity of creating the string version of the datatypes, we will create a fake table of both columns and parameters so that they can both be processed the same way. After all, we want the datatypes to match Unfortunately, parameters do not expose all the attributes we are interested in, such as whether they are nullable (oh yes, subtle bugs happen if this isn’t consistent for a datatype). We’ll have to leave them out for this check. Voila! A slight modification of the first routine ;WITH userObject AS  ( SELECT   Name AS DataName,--the actual name of the parameter or column ('@' removed)  --and the qualified object name of the routine  OBJECT_SCHEMA_NAME(ObjectID) + '.' + OBJECT_NAME(ObjectID) AS ObjectName,  --now the harder bit: the definition of the datatype.  TypeName + ' '     + CASE     --we may have to put in the length. e.g. 
CHAR (10)           WHEN TypeName IN ('char', 'varchar', 'nchar', 'nvarchar')             THEN '('               + CASE WHEN MaxLength = -1 THEN 'MAX'                ELSE CONVERT(VARCHAR(4),                    CASE WHEN TypeName IN ('nchar', 'nvarchar')                      THEN MaxLength / 2 ELSE MaxLength                    END)                END + ')'         WHEN TypeName IN ('decimal', 'numeric')--a BCD number!             THEN '(' + CONVERT(VARCHAR(4), Precision)                   + ',' + CONVERT(VARCHAR(4), Scale) + ')'         ELSE ''      END  --we've done with putting in the length      + CASE WHEN XML_collection_ID <> 0 --tush tush. XML         THEN --deal with object schema names             '(' + CASE WHEN is_XML_Document = 1                    THEN 'DOCUMENT '                    ELSE 'CONTENT '                   END              + COALESCE(               (SELECT TOP 1 QUOTENAME(ss.name) + '.' + QUOTENAME(sc.Name)                FROM sys.xml_schema_collections sc                INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID                WHERE sc.xml_collection_ID = XML_collection_ID),'NULL') + ')'          ELSE ''         END        AS [DataType],       DataObjectType  FROM   (Select t.name AS TypeName, REPLACE(c.name,'@','') AS Name,          c.max_length AS MaxLength, c.precision AS [Precision],           c.scale AS [Scale], c.[Object_id] AS ObjectID, XML_collection_ID,          is_XML_Document,'P' AS DataobjectType  FROM sys.parameters c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID  AND parameter_id>0  UNION all  Select t.name AS TypeName, c.name AS Name, c.max_length AS MaxLength,          c.precision AS [Precision], c.scale AS [Scale],          c.[Object_id] AS ObjectID, XML_collection_ID,is_XML_Document,          'C' AS DataobjectType            FROM sys.columns c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID   WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys'  )f)SELECT CONVERT(CHAR(80),objectName+'.'   + CASE WHEN DataobjectType ='P' THEN '@' ELSE '' END + DataName),DataType FROM UserObjectWHERE DataName IN   (SELECT DataName FROM UserObject   GROUP BY DataName    HAVING MIN(Datatype)<>MAX(DataType))ORDER BY DataName     Hmm. I can tell you I found quite a few minor issues with the various tabases I tested this on, and found some potential bugs that really leap out at you from the results. Here is the start of the result for AdventureWorks. Yes, AccountNumber is, for some reason, a Varchar(10) in the Customer table. Hmm. odd. Why is a city fifty characters long in that view?  The idea of the description of a colour being 256 characters long seems over-ambitious. Go down the list and you'll spot other mistakes. There are no bugs, but just mess. We started out with a listing to examine parameters, then we mixed parameters and columns. Our last listing is for a slightly more in-depth look at table columns. You’ll notice that we’ve delibarately removed the indication of whether a column is persisted, or is an identity column because that gives us false positives for our code smells. If you just want to browse your metadata for other reasons (and it can quite help in some circumstances) then uncomment them! ;WITH userColumns AS  ( SELECT   c.NAME AS columnName,  OBJECT_SCHEMA_NAME(c.object_ID) + '.' 
+ OBJECT_NAME(c.object_ID) AS ObjectName,  REPLACE(t.name + ' '   + CASE WHEN is_computed = 1 THEN ' AS ' + --do DDL for a computed column          (SELECT definition FROM sys.computed_columns cc           WHERE cc.object_id = c.object_id AND cc.column_ID = c.column_ID)     --we may have to put in the length            WHEN t.Name IN ('char', 'varchar', 'nchar', 'nvarchar')             THEN '('               + CASE WHEN c.Max_Length = -1 THEN 'MAX'                ELSE CONVERT(VARCHAR(4),                    CASE WHEN t.Name IN ('nchar', 'nvarchar')                      THEN c.Max_Length / 2 ELSE c.Max_Length                    END)                END + ')'       WHEN t.name IN ('decimal', 'numeric')       THEN '(' + CONVERT(VARCHAR(4), c.precision) + ',' + CONVERT(VARCHAR(4), c.Scale) + ')'       ELSE ''      END + CASE WHEN c.is_rowguidcol = 1          THEN ' ROWGUIDCOL'          ELSE ''         END + CASE WHEN XML_collection_ID <> 0            THEN --deal with object schema names             '(' + CASE WHEN is_XML_Document = 1                THEN 'DOCUMENT '                ELSE 'CONTENT '               END + COALESCE((SELECT                QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)                FROM                sys.xml_schema_collections sc                INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID                WHERE                sc.xml_collection_ID = c.XML_collection_ID),                'NULL') + ')'            ELSE ''           END + CASE WHEN is_identity = 1             THEN CASE WHEN OBJECTPROPERTY(object_id,                'IsUserTable') = 1 AND COLUMNPROPERTY(object_id,                c.name,                'IsIDNotForRepl') = 0 AND OBJECTPROPERTY(object_id,                'IsMSShipped') = 0                THEN ''                ELSE ' NOT FOR REPLICATION '               END             ELSE ''            END + CASE WHEN c.is_nullable = 0               THEN ' NOT NULL'               ELSE ' NULL'              END + CASE                WHEN c.default_object_id <> 0                THEN ' DEFAULT ' + object_Definition(c.default_object_id)                ELSE ''               END + CASE                WHEN c.collation_name IS NULL                THEN ''                WHEN c.collation_name <> (SELECT                collation_name                FROM                sys.databases                WHERE                name = DB_NAME()) COLLATE Latin1_General_CI_AS                THEN COALESCE(' COLLATE ' + c.collation_name,                '')                ELSE ''                END,'  ',' ') AS [DataType]FROM sys.columns c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID  WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys')SELECT CONVERT(CHAR(80),objectName+'.'+columnName),DataType FROM UserColumnsWHERE columnName IN (SELECT columnName FROM UserColumns  GROUP BY columnName  HAVING MIN(Datatype)<>MAX(DataType))ORDER BY columnName If you take a look down the results against Adventureworks, you'll see once again that there are things to investigate, mostly, in the illustration, discrepancies between null and non-null datatypes So I here you ask, what about temporary variables within routines? If ever there was a source of elusive bugs, you'll find it there. Sadly, these temporary variables are not stored in the metadata so we'll have to find a more subtle way of flushing these out, and that will, I'm afraid, have to wait!

    Read the article

  • How do you play or record audio (to .WAV) on Linux in C++? [closed]

    - by Jacky Alcine
    Hello, I've been looking for a way to play and record audio on a Linux (preferably Ubuntu) system. I'm currently working on a front-end to a voice recognition toolkit that'll automate a few steps required to adapt a voice model for PocketSphinx and Julius. Suggestions of alternative means of audio input/output are welcome, as well as a fix to the bug shown below. Here is the current code I've used so far to play a .WAV file: void Engine::sayText ( const string OutputText ) { string audioUri = "temp.wav"; string requestUri = this->getRequestUri( OPENMARY_PROCESS , OutputText.c_str( ) ); int error , audioStream; pa_simple *pulseConnection; pa_sample_spec simpleSpecs; simpleSpecs.format = PA_SAMPLE_S16LE; simpleSpecs.rate = 44100; simpleSpecs.channels = 2; eprintf( E_MESSAGE , "Generating audio for '%s' from '%s'..." , OutputText.c_str( ) , requestUri.c_str( ) ); FILE* audio = this->getHttpFile( requestUri , audioUri ); fclose(audio); eprintf( E_MESSAGE , "Generated audio."); if ( ( audioStream = open( audioUri.c_str( ) , O_RDONLY ) ) < 0 ) { fprintf( stderr , __FILE__": open() failed: %s\n" , strerror( errno ) ); goto finish; } if ( dup2( audioStream , STDIN_FILENO ) < 0 ) { fprintf( stderr , __FILE__": dup2() failed: %s\n" , strerror( errno ) ); goto finish; } close( audioStream ); pulseConnection = pa_simple_new( NULL , "AudioPush" , PA_STREAM_PLAYBACK , NULL , "openMary C++" , &simpleSpecs , NULL , NULL , &error ); for (int i = 0;;i++ ) { const int bufferSize = 1024; uint8_t audioBuffer[bufferSize]; ssize_t r; eprintf( E_MESSAGE , "Buffering %d..",i); /* Read some data ... */ if ( ( r = read( STDIN_FILENO , audioBuffer , sizeof (audioBuffer ) ) ) <= 0 ) { if ( r == 0 ) /* EOF */ break; eprintf( E_ERROR , __FILE__": read() failed: %s\n" , strerror( errno ) ); if ( pulseConnection ) pa_simple_free( pulseConnection ); } /* ... and play it */ if ( pa_simple_write( pulseConnection , audioBuffer , ( size_t ) r , &error ) < 0 ) { fprintf( stderr , __FILE__": pa_simple_write() failed: %s\n" , pa_strerror( error ) ); if ( pulseConnection ) pa_simple_free( pulseConnection ); } usleep(2); } /* Make sure that every single sample was played */ if ( pa_simple_drain( pulseConnection , &error ) < 0 ) { fprintf( stderr , __FILE__": pa_simple_drain() failed: %s\n" , pa_strerror( error ) ); if ( pulseConnection ) pa_simple_free( pulseConnection ); } } NOTE: If you want the rest of the code to this file, you can download it here directly from Launchpad.
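
    Since the question welcomes alternative means of audio input/output, one low-effort option on Ubuntu is to shell out to the ALSA command-line utilities arecord and aplay instead of talking to PulseAudio directly (the same commands can be invoked from C++ via system() or popen()). A rough sketch follows, in Python for brevity and purely illustrative; the file path and recording duration are placeholders.

      import subprocess

      WAV_PATH = "temp.wav"  # placeholder path

      def record_wav(path, seconds=5):
          """Record from the default capture device to a 16-bit, 44.1 kHz, stereo WAV."""
          subprocess.run(["arecord", "-f", "cd", "-d", str(seconds), path], check=True)

      def play_wav(path):
          """Play a WAV file on the default output device."""
          subprocess.run(["aplay", path], check=True)

      if __name__ == "__main__":
          record_wav(WAV_PATH, seconds=3)
          play_wav(WAV_PATH)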

    Read the article

  • Silverlight Cream for November 17, 2011 -- #1168

    - by Dave Campbell
    In this Issue: Colin Eberhardt, Lazar Nikolov, WindowsPhoneGeek, Jesse Liberty, Peter Kuhn, Derik Whittaker, Chris Koenig, and Jeff Blankenburg(-2-). Above the Fold: Silverlight: "Facebook Graph API and Silverlight Part 2 – Publishing data" Lazar Nikolov WP7: "Suppressing Zoom and Scroll interactions in the Windows Phone 7 WebBrowser Control" Colin Eberhardt Metro/WinRT/W8: "Tip/Trick when working with the Application Bar in WinRT/Metro (C#)" Derik Whittaker Shoutouts: Michael Palermo's latest Desert Mountain Developers is up Michael Washington's latest Visual Studio #LightSwitch Daily is up Pete Brown announced the completion of his book: It’s a wrap! I’ve completed writing Silverlight 5 in Action From SilverlightCream.com: Suppressing Zoom and Scroll interactions in the Windows Phone 7 WebBrowser Control Colin Eberhardt's latest post is all about a helper class he wrote to suppress scrolling and pinch zoom of the WP7 browser control, which you might want to do if the browser is placed inside another control. Facebook Graph API and Silverlight Part 2 – Publishing data In this part 2 of his Facebook and Silverlight series, Lazar Nikolov shows how to post data to your profile or your friends' profiles Localizing a Windows Phone app Step by Step WindowsPhoneGeek's latest post is on Localizing a WP7 app .. another great tutorial with plenty of discussion, pictures, and a project to load up and follow Background Audio Part II: Copying Audio Files To Isolated Storage Continuing his WP7 series, Jesse Liberty has Part 2 of a mini-series on Background Audio up... in this episode he's using local audio and to do so, it must be in ISO Silverlight: Bugs in the multicast client In a Q/A session, Peter Kuhn was presented a nasty bug in the multicast client that he has verified exists in not only Silverlight 4 but also Silverlight 5 Beta, including a link to his entry on Connect. Tip/Trick when working with the Application Bar in WinRT/Metro (C#) Derik Whittaker offers up some good information about the Metro Application Bar and how to keep it where you want it New! Windows Phone Starter Kit for Podcasts Chris Koenig announced the release of a new starter kit for WP7... a starter kit for podcasts. Check out the links on Chris' site and the other two starter kits that are available 31 Days of Mango | Day #4: Compass Jeff Blankenburg continues with Day 4 of his Mango series with this post on the Compass and a cool app to demonstrate it 31 Days of Mango | Day #5: Gyroscope In Day 5, Jeff Blankenburg is talking about and discussing the gyroscope, of course if you have a phone as old as mine, you won't have a gyroscope and it's not on the emulator Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Agile PLM 9.3 Service Pack 2 (SP2 or 9.3.0.2) is released along with AUT 1.6.2.0 and AutoVue 20 for Agile PLM

    - by Shane Goodwin
    Oracle released Agile PLM 9.3 SP2 on June 14 and the Agile installer for AutoVue 20 for Agile PLM on April 30. Also available are the new versions of AUT and Averify - 1.6.3 for both tools. 9.3 SP2 is a combined English and NLS release for use on any version of 9.3.0. SP2 contains many bug fixes and rolls up several Hot Fixes - please review the Readme for all the details. In addition, this release also addresses some scalability issues when working with very large Exports and Reports. When exporting very large BOMs, the export module will now release objects more efficiently to reduce the amount of memory consumed on the Application Server. Administrators can also control the maximum row limits for Users versus system processes, like ACS. Several out-of-the-box BOM reports have also been changed to use a new row limit option. The combination of all these changes will provide more stability on the application server for customers managing very large datasets. 9.3 SP2 also adds support for Oracle Database 11gR2 for Windows, Oracle Internet Directory (OID) and Oracle Access Manager (OAM). Please note that currently the Variant Patch is not intended to be released for SP2; customers running the Variant Patch should remain on 9.3.0.0 or 9.3.0.1. Back in April, we also released the AutoVue 20 for Agile PLM installer. AutoVue 20 has many new features which will help Agile PLM customers. Large multi-page Word documents and 2D CAD documents will open more quickly to the first page or first rendition. Memory usage is lower when working with 3D models. There are many new formats supported for MCAD, 2D CAD, and EDA. AutoVue 20 is immediately available for Windows and Linux platforms. The new software can be found in E-Delivery or MetaLink / Oracle Support: - AutoVue 20 for Agile PLM is on E-Delivery with part number B58963-01 - Oracle Agile PLM 9.3 Service Pack 2 (9.3.0.2) My Oracle Support Patch ID 9782736 - AVERIFY 1.6.3 My Oracle Support Patch ID 9791892 - AUT 1.6.3 My Oracle Support Patch ID 9791908 - Agile PLM 9.3 SP2 Documentation is available on the OTN Agile Documentation Page

    Read the article

  • Set up a TFS Server/Service demo environment in less than 1 minute now!

    - by Tarun Arora
    Release Notes – http://tfsdemosetup.codeplex.com/  | Download | Source Code | Report a Bug | Ideas To demonstrate the capabilities of the TFS 2012 Server/Service task board, you need to set up TFS with some teams, a few team members, some sample stories, tasks, etc. That’s too many steps if you ask me! Hi! My name is Tarun Arora, I am a Microsoft MVP in Visual Studio ALM and a Visual Studio ALM Ranger; as a consultant I have had to demo TFS Preview to potential customers several times a day. I usually create the team project during the demo to show off how quick and efficient it is, but setting up teams, team members and tasks usually takes longer, so I prefer not to carry out those steps during the demo. I have developed a .NET-based console application which uses the TFS API to create a standard demo environment, saving me from all these manual steps. The console application reads the setup information from an XML file, leaving the setup process highly customizable. Figure 1 – Demo Dictionary, change values here for a unique setup The console application today sets up: 1. Create a new Team 2. Set the team as the default team 3. Configure team settings      a. Set Backlog Iteration path      b. Set Team Iterations and start & finish dates      c. Set Team Area path 4. Add Team Members 5. Add Product Backlog Items & linked Tasks. Image 2 – The team website before (on the left) and after (on the right) running the console app Image 3 – Team configuration before (on left) and after (on right) with new team Demo and 2 members Image 4 – Iteration configuration before (on left) and after (on right) with new backlog iteration path & sprint dates set Image 5 – Area configuration before (on left) and after (on right) with area path configured for the team   Image 6 – A demo-ready Task Board and Task Board for Team Members Credits: - Mattias Sköld [Visual Studio ALM Ranger] – I have used TfsTeamTools to perform team creation & add members - Ivan Popek – TFS 2012 API blog posts had some fantastic reusable samples.  - Shai Raiten [Microsoft ALM MVP] – Great collection of posts on TFS API. Enjoy!

    Read the article

  • Hype and LINQ

    - by Tony Davis
    "Tired of querying in antiquated SQL?" I blinked in astonishment when I saw this headline on the LinqPad site. Warming to its theme, the site suggests that what we need is to "kiss goodbye to SSMS", and instead use LINQ, a modern query language! Elsewhere, there is an article entitled "Why LINQ beats SQL". The designers of LINQ, along with many DBAs, would, I'm sure, cringe with embarrassment at the suggestion that LINQ and SQL are, in any sense, competitive ways of doing the same thing. In fact what LINQ really is, at last, is an efficient, declarative language for C# and VB programmers to access or manipulate data in objects, local data stores, ORMs, web services, data repositories, and, yes, even relational databases. The fact is that LINQ is essentially declarative programming in a .NET language, and so in many ways encourages developers into a "SQL-like" mindset, even though they are not directly writing SQL. In place of imperative logic and loops, it uses various expressions, operators and declarative logic to build up an "expression tree" describing only what data is required, not the operations to be performed to get it. This expression tree is then parsed by the language compiler, and the result, when used against a relational database, is a SQL string that, while perhaps not always perfect, is often correctly parameterized and certainly no less "optimal" than what is achieved when a developer applies blunt, imperative logic to the SQL language. From a developer standpoint, it is a mistake to consider LINQ simply as a substitute means of querying SQL Server. The strength of LINQ is that that can be used to access any data source, for which a LINQ provider exists. Microsoft supplies built-in providers to access not just SQL Server, but also XML documents, .NET objects, ADO.NET datasets, and Entity Framework elements. LINQ-to-Objects is particularly interesting in that it allows a declarative means to access and manipulate arrays, collections and so on. Furthermore, as Michael Sorens points out in his excellent article on LINQ, there a whole host of third-party LINQ providers, that offers a simple way to get at data in Excel, Google, Flickr and much more, without having to learn a new interface or language. Of course, the need to be generic enough to deal with a range of data sources, from something as mundane as a text file to as esoteric as a relational database, means that LINQ is a compromise and so has inherent limitations. However, it is a powerful and beautifully compact language and one that, at least in its "query syntax" guise, is accessible to developers and DBAs alike. Perhaps there is still hope that LINQ can fulfill Phil Factor's lobster-induced fantasy of a language that will allow us to "treat all data objects, whether Word files, Excel files, XML, relational databases, text files, HTML files, registry files, LDAPs, Outlook and so on, in the same logical way, as linked databases, and extract the metadata, create the entities and relationships in the same way, and use the same SQL syntax to interrogate, create, read, write and update them." Cheers, Tony.

    Read the article

  • TDD vs. Productivity

    - by Nairou
    In my current project (a game, in C++), I decided that I would use Test Driven Development 100% during development. In terms of code quality, this has been great. My code has never been so well designed or so bug-free. I don't cringe when viewing code I wrote a year ago at the start of the project, and I have gained a much better sense for how to structure things, not only to be more easily testable, but to be simpler to implement and use. However... it has been a year since I started the project. Granted, I can only work on it in my spare time, but TDD is still slowing me down considerably compared to what I'm used to. I read that the slower development speed gets better over time, and I definitely do think up tests a lot more easily than I used to, but I've been at it for a year now and I'm still working at a snail's pace. Each time I think about the next step that needs work, I have to stop and think about how I would write a test for it before I can write the actual code. I'll sometimes get stuck for hours, knowing exactly what code I want to write, but not knowing how to break it down finely enough to fully cover it with tests. Other times, I'll quickly think up a dozen tests and spend an hour writing tests to cover a tiny piece of real code that would have otherwise taken a few minutes to write. Or, after finishing the 50th test to cover a particular entity in the game and all aspects of its creation and usage, I look at my to-do list, see the next entity to be coded, and cringe in horror at the thought of writing another 50 similar tests to get it implemented. It's gotten to the point that, looking over the progress of the last year, I'm considering abandoning TDD for the sake of "getting the damn project finished". However, giving up the code quality that came with it is not something I'm looking forward to. I'm afraid that if I stop writing tests, I'll slip out of the habit of making the code so modular and testable. Am I perhaps doing something wrong to still be so slow at this? Are there alternatives that speed up productivity without completely losing the benefits? TAD? Less test coverage? How do other people survive TDD without killing all productivity and motivation?

    Read the article

  • Crash Report in Ubuntu... hardware problem?

    - by Andrew
    Got this on my machine. I was just browsing the web on Chrome and my computer froze. I recently just built this machine. I have a feeling it is a hardware problem... Possibly one of my parts arrived broken in some way.... Starting anac(h)ronistic cron Stopping anac(h)ronistic cron Stopping cold plug devices Stopping log initial device creation Starting enable remaining boot-time encrypted block devices Starting configure network device security Starting configure virtual network devices Starting save udev log and update rules Stopping configure virtual network devices Stopping save udev log and update rules Checking battery state... Stopping System V runlevel compatibility Stopping enable remaining boot-time encrypted block devices Stopping Mount filesystems on boot 91.573384] BUG: unable to handle kernel NULL pointer dereference at (null) 91.573437] IP: [<ffffffff81313514>] strcmp+0x14/0x30 91.573470] PGD 1f7822067 PUD 1ed7a6067 PMD 0 91.573498] Oops: 0000 [#1] SMP 91.573519] CPU 3 91.573531] Modules linked in: dm_crypt bnep snd_hda_codec_realtek rfcomm bluetooth parport_pc ppdev arc4 fglrx(P) rt2800usb rt2800lib crc_ccitt rt2x00usb rt2x00lib mac0021 cfg80211 psmouse snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_seq_midi snd_rawmidi snd_seq_midi_event snd_seq snd_timer send_seq_device snd joydev mac_hid mei(C) soundcore serio_raw snd_page_alloc lp parport ses enclosure usbhid hid i915 drm_kms_helper drm i2c_algo_bit mxm_umi tg_video wmi usb_storage 91.573826] 91.573837] Pid: 2297, comm: update-notifier Tainted: P C O 3.2.0-29-generic #46-Ubuntu To Be Filled By O.E.M. To Be Filled By O.E.M./Z77 Extreme4 91.573912] RIP: 0010:[<ffffffff81313514>] [<ffffffff81313514>] strcmp+0x14/0x30 91.573954] RSP: 0018:ffff8801f83f5bb8 EFLAGS: 00010246 91.573982] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000 91.574019] RDX: 0000000000000069 RSI: 0000000000000000 RDI: ffff88021adb26f8 91.574056] RBP: ffff8801f83f5bb8 R08: ffff88022f2d6e80 R09: 0000000000000000 91.574093] R10: ffff88021e7dbf00 R11: 0000000000000003 R12: ffff88021c10eb40 91.574130] R13: 0000000000000000 R14: ffff88021adb26f8 R15: ffff8801f83f5d40 91.574168] FS: 00007f958cf53940(0000) GS:ffff88022f2c0000(0000) kn1GS:0000000000000000 91.574210] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 91.574240] CR2: 0000000000000000 CR3: 000000021f6d7000 CR4: 00000000000406e0 91.574277] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 91.574314] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000000 91.574351] Process update-notifier (pid: 2297, threadinfo ffff801f83f4000, task ffff880208fe2e00) 91.574397] Stack: 91.574409] ffff8801f83f5be8 ffffffff811ed509 ffff88021adb26c0 ffff88021b8b7020 91.574453] ffff88021b461c60 fffffffffffffffe ffff8801f83f5c18 ffffffff811ed61f 91.574496] ffff88021adb26c0 ffff88021b8b7020 ffff8801f83f5dc8 0000000000000001 91.574539] Call Trace: 91.574558] [<ffffffff811ed509] sysfs_find_dirent+0x59/0x110 91.574591] [<ffffffff811ed61f] sysfs_lookup+0x5f/0x110 91.574621] [<ffffffff81182745] d_alloc_and_lookup+0x45/0x90 91.574654] [<ffffffff8118fe65] ? d_lookup+0x35/0x60 91.574683] [<ffffffff811848d2] do_lookup+0x202/0x310 91.574712] [<ffffffff8118660c] path_lookupat+0x11c/0x750 91.574744] [<ffffffff81318db7] ? __strncpy_from_user+0x27/0x60 91.574778] [<ffffffff81186c71] do_path_lookup+0x31/0xc0 91.574809] [<ffffffff81187779] user_path_at_empty+0x59/0xa0 91.574842] [<ffffffff81187822] ? 
do_filp_open+0x42/0xa0 91.574872] [<ffffffff811877d1] user_path_at+0x11/0x20 91.574902] [<ffffffff8117c80a] vfs_fstatat+0x3a/0x70 91.574933] [<ffffffff81161cff] ? kmem_cache_free+0x2f/0x110 91.574965] [<ffffffff8117c85e] vfs_lstat+-x31/0x70 91.574993] [<ffffffff8117c9fa] sys_newlstat+0x1a/0x40 91.575022] [<ffffffff81176ee1] ? do_sys_open+0x171/0x220 91.575053] [<ffffffff8117cb1a] ? sys_readlinkat+0x7a/0xb0 91.575086] [<ffffffff81661ec2] system_call_fastpath+0x16/0x1b 91.575118] Code: 83 c1 01 40 84 ff 75 ef 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 00 55 31 c0 48 89 e5 66 2e 0f 1f 84 00 00 00 00 00 0f b6 14 07 <3a> 14 06 75 0f 48 83 c0 01 84 d2 75 ef 31 c0 5d c3 0f 1f 00 19 91.577243] RIP [<ffffffff81313514>] strcmp+0x14/0x30 91.579314] RSP <ffff8801f83f5bb8> 91.581385] CR2: 0000000000000000

    Read the article

  • Windows Azure Training Kit October 2012 Release

    - by Clint Edmonson
    The Windows Azure Technical Evangelism team have been busy bees lately and we want to share with you what they’ve been working on. As you know we release the Windows Azure Training Kit on a regular cadence, so I’m pleased to announce the Windows Azure Training Kit October 2012 Release. This update of the training kit includes 47 hands-on labs, 24 demos and 38 presentations designed to help you learn how to build applications that use Windows Azure services, including updated hands-on labs to use the latest version of Visual Studio 2012 and Windows 8, new demos and presentations. Essential Links: Windows Azure Training Kit Download Windows Azure Training Kit Github [Issues] Updated Presentations With Speaker Notes Your voices were heard loud and clear! I am excited to announce Speaker Notes have been added to a the majority of the content we have available. Find the new updated decks which contain speaker notes below: Foundation SQL Federation Virtual Machine Overview Virtual Networks Windows 8 and Windows Azure Web Sites Windows Azure Cloud Services Windows Azure Overview Windows Azure Service Bus Deploying Active Directory Building Apps With IaaS and PaaS Identity and Access Control Linux Virtual Machines Managing Virtual Machines PowerShell Migrating Apps and Workloads Scalable Global and Highly Available Apps Security and Identity SQL Database SQL Database Migration Cloud Service Life Cycle DevCamps Cloud Services iOS, Android and Windows Azure Windows 8 and Windows Azure Web Sites Windows 8 and Windows Azure Mobile Services Added Localized Content Due to the excitement in the community surrounding the mobile services launch, it was apparent that we needed to make localized content available to continue to deliver the exciting message around Windows Azure Mobile Services. Localized content is available in the following languages: French Japanese German Chinese (Taiwan) Spanish Italian Korean Portuguese (Brazilian) Russian Updated Hands-On Labs To support those who have upgraded to Visual Studio 2012 or those trying out the Visual Studio 2012 Express Editions, we have made sure that the content is available and supported (selected labs only) in Visual Studio 2012 Express and up. Visual Studio 2012 Windows Azure Traffic Manager Introduction to Cloud Services Service Bus Messaging Introduction to Access Control Service This adds a significant amount of additional content, so we have revamped the Hands-On Lab Navigation page to include subsections for Visual Studio 2012 Labs, Visual Studio 2010 Labs, Open Source Labs, Scenario Labs, All Labs. Added Demos Demos are available for a number of presentations which are available in Foundation, DevCamp, ITPro Event & Device + Service DevCamps. You can browse through the demos on the respective Demo Navigation page or on Github (links provided in Demo listing below). HelloASP Connecting Cloud Services Service Bus Relay Windows 8 and Mobile Services URL Shortener iOS Client Migrating a Web Farm Deploying Active Directory URL Shortener Service  (PHP) Geo-Location Service (PHP) Geo-Location Android Client Getting Started with VMs Load Balancing Availability Deploying Hybrid Apps Migrate VM AppController Geo-Location iOS Client Scale Up/Down Using CSUpload URL Shortener Android Client Imaging Virtual Machines The Windows Azure Training Kit is open source and available on GitHub, enabling you in the community to Report Issues or Fork and either extend the solution or commit bug fixes back to the Training Kit. 
You can find out more details about the training kit from our GitHub page, including guidelines on how to commit back to the project. Stay tuned to my Twitter feed for Windows Azure and other Microsoft announcements, updates, and links: @clinted

    Read the article
