Search Results

Search found 5267 results on 211 pages for 'use cases'.


  • Advice Needed: Developers blocked by waiting on code to merge from another branch using GitFlow

    - by fogwolf
    Our team just made the switch from FogBugz & Kiln/Mercurial to Jira & Stash/Git. We are using the Git Flow model for branching, adding subtask branches off of feature branches (relating to Jira subtasks of Jira features). We are using Stash to assign a reviewer when we create a pull request to merge back into the parent branch (usually develop, but for subtasks back into the feature branch). The problem we're finding is that even with the best planning and breakdown of feature cases, when multiple developers are working together on the same feature, say on the front-end and back-end, and they are working on interdependent code that sits in separate branches, one developer ends up blocking the other. We've tried pulling between each other's branches as we develop. We've also tried creating local integration branches into which each developer can pull from multiple branches to test the integration as they develop. Finally, and this seems to work possibly the best for us so far, though with a bit more overhead, we have tried creating an integration branch off of the feature branch right off the bat. When a subtask branch (off of the feature branch) is ready for a pull request and code review, we also manually merge those changesets into this feature integration branch. Then all interested developers are able to pull from that integration branch into other dependent subtask branches. This prevents anyone from waiting for a branch they are dependent upon to pass code review. I know this isn't necessarily a Git issue - it has to do with working on interdependent code in multiple branches, mixed with our own work process and culture. If we didn't have the strict code-review policy for develop (the true integration branch), then developer 1 could merge to develop for developer 2 to pull from. Another complication is that we are also required to do some preliminary testing as part of the code review process before handing the feature off to QA. This means that even if front-end developer 1 is pulling directly from back-end developer 2's branch as they go, and back-end developer 2 finishes but his/her pull request sits in code review for a week, then front-end developer 1 technically can't create his/her own pull request/code review, because his/her code reviewer can't test until back-end developer 2's code has been merged into develop. The bottom line is that we're finding ourselves taking a much more serial rather than parallel approach in these instances, depending on which route we go, and we would like to find a process that avoids this. The last thing I'll mention is that we realize that by sharing code across branches that haven't been code reviewed and finalized yet, we are in essence using the beta code of others. To a certain extent I don't think we can avoid that, and we are willing to accept it to a degree. Anyway, any ideas, input, etc... greatly appreciated. Thanks!
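    For illustration, here is a minimal sketch of the feature-integration-branch workflow described above. All branch names (feature/payments and its subtasks) are invented for this example:

      $ git checkout develop
      $ git checkout -b feature/payments                              # shared feature branch
      $ git checkout -b feature/payments-integration                  # throwaway integration branch off the feature branch
      $ git checkout -b feature/payments-backend feature/payments     # back-end subtask branch
      $ git checkout -b feature/payments-frontend feature/payments    # front-end subtask branch
      ... back-end work is committed on feature/payments-backend ...
      $ git checkout feature/payments-integration
      $ git merge --no-ff feature/payments-backend                    # share unreviewed work without touching develop
      $ git checkout feature/payments-frontend
      $ git merge feature/payments-integration                        # front-end picks up the back-end changes early

    Only the reviewed subtask branches are merged back into the feature branch through pull requests; the integration branch exists purely so teammates can share unreviewed work, and it can be discarded once the feature is done.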

    Read the article

  • Music of the Future at Oracle OpenWorld 2013

    - by Alliances & Channels Redaktion
    "The future begins at Oracle OpenWorld", das Motto weckt große Erwartungen! Wie die Zukunft aussehen könnte, davon konnten sich 60.000 Besucherinnen und Besucher aus 145 Ländern vor Ort in San Francisco selbst überzeugen: In sage und schreibe 2.555 Sessions – verteilt über Downtown San Francisco – ging es dort um Zukunftstechnologien und neue Entwicklungen. Wie soll man zusammenfassen, was insgesamt 3.599 Speaker, fast die Hälfte übrigens Kunden und Partner, in vier Tagen an technologischen Visionen entwickelt und präsentiert haben? Nehmen wir ein konkretes Beispiel, das in diversen Sessions immer wieder auftauchte: Das „Internet of Things“, sprich „intelligente“ Alltagsgegenstände, deren eingebaute Minicomputer ohne den Umweg über einen PC miteinander kommunizieren und auf äußere Einflüsse reagieren. Für viele ist das heute noch Neuland, doch die Weiterentwicklung des Internet of Things eröffnet für Oracle, wie auch für die Partner, ein spannendes Arbeitsfeld und natürlich auch einen neuen Markt. Die omnipräsenten Fokus-Themen der viertägigen größten Hauskonferenz von Oracle hießen in diesem Jahr Customer Experience und Human Capital Management. Spannend für Partner waren auch die Strategien und die Roadmap von Oracle sowie die Neuigkeiten aus den Bereichen Engineered Systems, Cloud Computing, Business Analytics, Big Data und Customer Experience. Neue Rekorde stellte die Oracle OpenWorld auch im Netz auf: Mehr als 2,1 Millionen Menschen besuchten diese Veranstaltung online und nutzten dabei über 224 Social-Media Kanäle – fast doppelt so viele wie noch vor einem Jahr. Die gute Nachricht: Die Oracle OpenWorld bleibt online, denn es besteht nach wie vor die Möglichkeit, OnDemand-Videos der Keynote- und Session-Highlights anzusehen: Gehen Sie einfach auf Conference Video Highlights  und wählen Sie aus acht Bereichen entweder eine Zusammenfassung oder die vollständige Keynote beziehungsweise Session. Dort finden Sie auch Videos der eigenen Fach-Konferenzen, die im Umfeld der Oracle OpenWorld stattfanden: die JavaOne, die MySQL Connect und der Oracle PartnerNetwork Exchange. Beim Oracle PartnerNetwork Exchange wurden, ganz auf die Fragen und Bedürfnisse der Oracle Partner zugeschnitten, Themen wie Cloud für Partner, Applications, Engineered Systems und Hardware, Big Data, oder Industry Solutions behandelt, und es gab, ganz wichtig, viel Gelegenheit zu Austausch und Vernetzung. Konkret befassten sich dort beispielsweise Sessions mit Cloudanwendungen im Gesundheitsbereich, mit der Erstellung überzeugender Business Cases für Kundengespräche oder mit Mobile und Social Networking. Die aus Deutschland angereisten über 40 Partner trafen sich beim OPN Exchange zu einem anregenden gemeinsamen Abend mit den anderen Teilnehmern. Dass die Oracle OpenWorld auch noch zum sportlichen Highlight werden würde, kam denkbar unerwartet: Zeitgleich mit der Konferenz wurde nämlich in der Bucht von San Francisco die entscheidende 19. Etappe des Americas Cup ausgetragen. Im traditionsreichen Segelwettbewerb lag Team Oracle USA zunächst mit 1:8 zurück, schaffte es aber dennoch, den Sieg vor dem lange Zeit überlegenen Team Neuseeland zu holen und somit den Titel zu verteidigen. Selbstverständlich fand die Oracle OpenWorld auch ein großes Medienecho. Wir haben eine Auswahl für Sie zusammengestellt: - ChannelPartner- Computerwoche - Heise - Silicon über Big Data - Silicon über 12c

    Read the article

  • MDM Poised for Growth

    - by david.butler(at)oracle.com
    David Nixon, an Oracle colleague of mine, was doing some research on MDM the other day. He came up with some well-founded insights that I thought I’d share with you. Gartner recently published a note asking “Should Organizations Using ERP 'Do' Master Data Management?”  It may seem a bit strange, but that’s a question Gartner has been asked by a number of companies as organizations are beginning to understand the importance of data governance and data stewardship.  That’s because ERP suites typically “focus on integrating their own applications within suites, but have little interest in making their suites interoperate with the applications or suites of other vendors.”  Therefore, Gartner is advising customers that “have deployed or plan to support multiple packaged application suites (even from the same vendor) that have different semantic data and/or process models” to add an MDM solution. And it appears that customers are taking note.  In a more recent note entitled “Search Analytics Trends: Master Data Management”, Gartner noted that MDM searches on gartner.com in November 2010 “were 300% higher than [in] May 2009, indicating the increased interest and importance that businesses are placing on MDM.”  Why the increased interest?  Moving towards a single version of the truth is a familiar theme, but customers are talking more about the underlying business value that this enables.  For example, businesses are talking about the need to fix master data before they can successfully move forward on SOA initiatives.  And the growing demands for compliance continue to be a major driver.  In short, companies are talking more about specific and tangible business value, and they are looking for help creating business cases for an MDM initiative. Why This Matters Gartner’s notes make three things clear.  First, MDM is poised for growth as organizations gain a greater understanding of it and of their need for it.  Many are still sorting it out, but the demand is growing and is sure to rise.  Second, any organization with a heterogeneous computing environment should invest in MDM.  Even solutions from the same vendor may have different data models and could benefit from MDM.  But the key to growth, or which vendors will benefit the most from it, is the third and perhaps most critical point: companies need help with the business case for MDM. Oracle can help your organization build a compelling business case for MDM. We have seen our 1100+ MDM customers gain competitive advantages in a wide variety of implementations. Give us a ring.

    Read the article

  • Create a Self-Signed Certificate on WLS 10.3.5 Supporting the SHA-256 Algorithm

    - by adejuanc
    1) Set domain to call the keytool:
       $ . setDomainEnv.sh
    2) Generate the key:
       $ keytool -genkey -alias selfsignedcert -keyalg RSA -sigalg SHA256withRSA -keypass privatepassword -keystore identity.jks -storepass password -validity 365
       What is your first and last name? [Unknown]: adejuan-desktop.cl.oracle.com
       What is the name of your organizational unit? [Unknown]: a
       What is the name of your organization? [Unknown]: e
       What is the name of your City or Locality? [Unknown]: i
       What is the name of your State or Province? [Unknown]: o
       What is the two-letter country code for this unit? [Unknown]: U
       Is CN=adejuan-desktop.cl.oracle.com, OU=a, O=e, L=i, ST=o, C=U correct? [no]: yes
    3) Export the root certificate:
       $ keytool -export -alias selfsignedcert -sigalg SHA256withRSA -file root.cer -keystore identity.jks
       Enter keystore password:
       Certificate stored in file <root.cer>
    4) Import the root certificate to the trust store:
       $ keytool -import -alias selfsignedcert -sigalg SHA256withRSA -trustcacerts -file root.cer -keystore trust.jks
       Enter keystore password:
       Re-enter new password:
       Owner: CN=adejuan-desktop.cl.oracle.com, OU=a, O=e, L=i, ST=o, C=U
       Issuer: CN=adejuan-desktop.cl.oracle.com, OU=a, O=e, L=i, ST=o, C=U
       Serial number: 4f17459a
       Valid from: Wed Jan 16 15:33:22 CLST 2012 until: Thu Jan 15 15:33:22 CLST 2013
       Certificate fingerprints:
         MD5: 7F:08:FA:DE:CD:D5:C3:D3:83:ED:B8:4F:F2:DA:4E:A1
         SHA1: 87:E4:7C:B8:D7:1A:90:53:FE:1B:70:B6:32:22:5B:83:29:81:53:4B
       Signature algorithm name: SHA256withRSA
       Version: 3
       Trust this certificate? [no]: yes
       Certificate was added to keystore
    5) To check the contents of the keystore:
       keytool -v -list -keystore identity.jks
       Enter keystore password:
       ***************** WARNING WARNING WARNING *****************
       * The integrity of the information stored in your keystore *
       * has NOT been verified! In order to verify its integrity, *
       * you must provide your keystore password.                 *
       ***************** WARNING WARNING WARNING *****************
       Keystore type: JKS
       Keystore provider: SUN
       Your keystore contains 1 entry
       Alias name: selfsignedcert
       Creation date: Jan 18, 2012
       Entry type: PrivateKeyEntry
       Certificate chain length: 1
       Certificate[1]:
       Owner: CN=adejuan-desktop.cl.oracle.com, OU=a, O=e, L=i, ST=o, C=U
       Issuer: CN=adejuan-desktop.cl.oracle.com, OU=a, O=e, L=i, ST=o, C=U
       Serial number: 4f17459a
       Valid from: Wed Jan 16 15:42:16 CLST 2012 until: Thu Jan 15 15:42:16 CLST 2013
       Certificate fingerprints:
         MD5: 7F:08:FA:DE:CD:D5:C3:D3:83:ED:B8:4F:F2:DA:4E:A1
         SHA1: 87:E4:7C:B8:D7:1A:90:53:FE:1B:70:B6:32:22:5B:83:29:81:53:4B
       Signature algorithm name: SHA256withRSA
       Version: 3
       *******************************************
       *******************************************
    6) In some cases, this parameter is needed in the server start-up parameters:
       -Dweblogic.ssl.JSSEEnabled=true
       Otherwise, enable it from the Server configuration -> SSL -> Use JSSE checkbox.

    Read the article

  • PTLQueue : a scalable bounded-capacity MPMC queue

    - by Dave
    I've used the following concurrent queue algorithm enough that it warrants a blog entry. I'll sketch out the design of a fast and scalable multiple-producer multiple-consumer (MPMC) concurrent queue called PTLQueue. The queue has bounded capacity and is implemented via a circular array. Bounded capacity can be a useful property if there's a mismatch between producer rates and consumer rates where an unbounded queue might otherwise result in excessive memory consumption by virtue of the container nodes that -- in some queue implementations -- are used to hold values. A bounded-capacity queue can provide flow control between components. Beware, however, that bounded collections can also result in resource deadlock if abused. The put() and take() operators are partial and wait for the collection to become non-full or non-empty, respectively. Put() and take() do not allocate memory, and are not vulnerable to the ABA pathologies. The PTLQueue algorithm can be implemented equally well in C/C++ and Java. Partial operators are often more convenient than total methods. In many use cases, if the preconditions aren't met there's nothing else useful the thread can do, so it may as well wait via a partial method. An exception is the case of work-stealing queues, where a thief might scan a set of queues from which it could potentially steal. Total methods return ASAP with a success-failure indication. (It's tempting to describe a queue or API as blocking or non-blocking instead of partial or total, but non-blocking is already an overloaded concurrency term. Perhaps waiting/non-waiting or patient/impatient might be better terms.) It's also trivial to construct partial operators by busy-waiting via total operators, but such constructs may be less efficient than an operator explicitly and intentionally designed to wait. A PTLQueue instance contains an array of slots, where each slot has volatile Turn and MailBox fields. The array has power-of-two length, allowing mod/div operations to be replaced by masking. We assume sensible padding and alignment to reduce the impact of false sharing. (On x86 I recommend 128-byte alignment and padding because of the adjacent-sector prefetch facility.) Each queue also has PutCursor and TakeCursor cursor variables, each of which should be sequestered as the sole occupant of a cache line or sector. You can opt to use 64-bit integers if concerned about wrap-around aliasing in the cursor variables. Put(null) is considered illegal, but the caller or implementation can easily check for and convert null to a distinguished non-null proxy value if null happens to be a value you'd like to pass. Take() will accordingly convert the proxy value back to null. An advantage of PTLQueue is that you can use atomic fetch-and-increment for the partial methods. We initialize each slot at index I with (Turn=I, MailBox=null). Both cursors are initially 0. All shared variables are considered "volatile" and atomics such as CAS and AtomicFetchAndIncrement are presumed to have bidirectional fence semantics. Finally, T is the templated type. I've sketched out a total tryTake() method below that allows the caller to poll the queue. tryPut() has an analogous construction. (Zebra striping: alternating row colors for nice-looking code listings. See also Google Code "prettify": https://code.google.com/p/google-code-prettify/ -- Prettify is a JavaScript module that yields the HTML/CSS/JS equivalent of pretty-print.)
    // PTLQueue :
    Put(v) :
      // producer : partial method - waits as necessary
      assert v != null
      assert Mask >= 1 && (Mask & (Mask+1)) == 0    // Document invariants
      // doorway step
      // Obtain a sequence number -- ticket
      // As a practical concern the ticket value is temporally unique
      // The ticket also identifies and selects a slot
      auto tkt = AtomicFetchIncrement (&PutCursor, 1)
      slot * s = &Slots[tkt & Mask]
      // waiting phase :
      // wait for slot's generation to match the tkt value assigned to this put() invocation.
      // The "generation" is implicitly encoded as the upper bits in the cursor
      // above those used to specify the index : tkt div (Mask+1)
      // The generation serves as an epoch number to identify a cohort of threads
      // accessing disjoint slots
      while s->Turn != tkt : Pause
      assert s->MailBox == null
      s->MailBox = v       // deposit and pass message

    Take() :
      // consumer : partial method - waits as necessary
      auto tkt = AtomicFetchIncrement (&TakeCursor, 1)
      slot * s = &Slots[tkt & Mask]
      // 2-stage waiting :
      // First wait for turn for our generation
      // Acquire exclusive "take" access to slot's MailBox field
      // Then wait for the slot to become occupied
      while s->Turn != tkt : Pause
      // Concurrency in this section of code is now reduced to just 1 producer thread
      // vs 1 consumer thread.
      // For a given queue and slot, there will be at most one Take() operation running
      // in this section.
      // Consumer waits for producer to arrive and make slot non-empty
      // Extract message; clear mailbox; advance Turn indicator
      // We have an obvious happens-before relation :
      // Put(m) happens-before corresponding Take() that returns that same "m"
      for
        T v = s->MailBox
        if v != null :
          s->MailBox = null
          ST-ST barrier
          s->Turn = tkt + Mask + 1   // unlock slot to admit next producer and consumer
          return v
        Pause

    tryTake() :
      // total method - returns ASAP with failure indication
      for
        auto tkt = TakeCursor
        slot * s = &Slots[tkt & Mask]
        if s->Turn != tkt : return null
        T v = s->MailBox             // presumptive return value
        if v == null : return null
        // ratify tkt and v values and commit by advancing cursor
        if CAS (&TakeCursor, tkt, tkt+1) != tkt : continue
        s->MailBox = null
        ST-ST barrier
        s->Turn = tkt + Mask + 1
        return v

    The basic idea derives from the Partitioned Ticket Lock "PTL" (US20120240126-A1) and the MultiLane Concurrent Bag (US8689237). The latter is essentially a circular ring-buffer where the elements themselves are queues or concurrent collections. You can think of the PTLQueue as a partitioned ticket lock "PTL" augmented to pass values from lock to unlock via the slots. Alternatively, you could conceptualize PTLQueue as a degenerate MultiLane bag where each slot or "lane" consists of a simple single-word MailBox instead of a general queue. Each lane in PTLQueue also has a private Turn field which acts like the Turn (Grant) variables found in PTL. Turn enforces strict FIFO ordering and restricts concurrency on the slot mailbox field to at most one simultaneous put() and take() operation. PTL uses a single "ticket" variable and per-slot Turn (grant) fields while MultiLane has distinct PutCursor and TakeCursor cursors and abstract per-slot sub-queues. Both PTL and MultiLane advance their cursor and ticket variables with atomic fetch-and-increment.
    PTLQueue borrows from both PTL and MultiLane and has distinct put and take cursors and per-slot Turn fields. Instead of per-slot queues, PTLQueue uses a simple single-word MailBox field. PutCursor and TakeCursor act like a pair of ticket locks, conferring "put" and "take" access to a given slot. PutCursor, for instance, assigns an incoming put() request to a slot and serves as a PTL "Ticket" to acquire "put" permission to that slot's MailBox field. To better explain the operation of PTLQueue we deconstruct the operation of put() and take() as follows. Put() first increments PutCursor, obtaining a new unique ticket. That ticket value also identifies a slot. Put() next waits for that slot's Turn field to match that ticket value. This is tantamount to using a PTL to acquire "put" permission on the slot's MailBox field. Finally, having obtained exclusive "put" permission on the slot, put() stores the message value into the slot's MailBox. Take() similarly advances TakeCursor, identifying a slot, and then acquires and secures "take" permission on a slot by waiting for Turn. Take() then waits for the slot's MailBox to become non-empty, extracts the message, and clears MailBox. Finally, take() advances the slot's Turn field, which releases both "put" and "take" access to the slot's MailBox. Note the asymmetry : put() acquires "put" access to the slot, but take() releases that lock. At any given time, for a given slot in a PTLQueue, at most one thread has "put" access and at most one thread has "take" access. This restricts concurrency from general MPMC to 1-vs-1. We have 2 ticket locks -- one for put() and one for take() -- each with its own "ticket" variable in the form of the corresponding cursor, but they share a single "Grant" egress variable in the form of the slot's Turn variable. Advancing the PutCursor, for instance, serves two purposes. First, we obtain a unique ticket which identifies a slot. Second, incrementing the cursor is the doorway protocol step to acquire the per-slot mutual exclusion "put" lock. The cursors and the operations to increment those cursors serve double-duty : slot-selection and ticket assignment for locking the slot's MailBox field. At any given time a slot MailBox field can be in one of the following states: empty with no pending operations -- neutral state; empty with one or more waiting take() operations pending -- deficit; occupied with no pending operations; occupied with one or more waiting put() operations -- surplus; empty with a pending put() or pending put() and take() operations -- transitional; or occupied with a pending take() or pending put() and take() operations -- transitional. The partial put() and take() operators can be implemented with an atomic fetch-and-increment operation, which may confer a performance advantage over a CAS-based loop. In addition we have independent PutCursor and TakeCursor cursors. Critically, a put() operation modifies PutCursor but does not access TakeCursor, and a take() operation modifies TakeCursor but does not access PutCursor. This acts to reduce coherence traffic relative to some other queue designs. It's worth noting that slow threads or obstruction in one slot (or "lane") does not impede or obstruct operations in other slots -- this gives us some degree of obstruction isolation. PTLQueue is not lock-free, however. The implementation above is expressed with polite busy-waiting (Pause), but it's trivial to implement per-slot parking and unparking to deschedule waiting threads.
    It's also easy to convert the queue to a more general deque by replacing the PutCursor and TakeCursor cursors with Left/Front and Right/Back cursors that can move in either direction. Specifically, to push and pop from the "left" side of the deque we would decrement and increment the Left cursor, respectively, and to push and pop from the "right" side of the deque we would increment and decrement the Right cursor, respectively. We used a variation of PTLQueue for message passing in our recent OPODIS 2013 paper. There's quite a bit of related literature in this area. I'll call out a few relevant references:
    - Wilson's NYU Courant Institute UltraComputer dissertation from 1988 is classic and the canonical starting point : Operating System Data Structures for Shared-Memory MIMD Machines with Fetch-and-Add. Regarding provenance and priority, I think PTLQueue or queues effectively equivalent to PTLQueue have been independently rediscovered a number of times. See CB-Queue and BNPBV, below, for instance. But Wilson's dissertation anticipates the basic idea and seems to predate all the others.
    - Gottlieb et al : Basic Techniques for the Efficient Coordination of Very Large Numbers of Cooperating Sequential Processors
    - Orozco et al : CB-Queue in Toward high-throughput algorithms on many-core architectures, which appeared in TACO 2012
    - Meneghin et al : BNPVB family in Performance evaluation of inter-thread communication mechanisms on multicore/multithreaded architecture
    - Dmitry Vyukov : bounded MPMC queue (highly recommended)
    - Alex Otenko : US8607249 (highly related)
    - John Mellor-Crummey : Concurrent queues: Practical fetch-and-phi algorithms. Technical Report 229, Department of Computer Science, University of Rochester
    - Thomasson : FIFO Distributed Bakery Algorithm (very similar to PTLQueue)
    - Scott and Scherer : Dual Data Structures
    I'll propose an optimization left as an exercise for the reader. Say we wanted to reduce memory usage by eliminating inter-slot padding. Such padding is usually "dark" memory, otherwise unused and wasted. But eliminating the padding leaves us at risk of increased false sharing. Furthermore, let's say it was usually the case that the PutCursor and TakeCursor were numerically close to each other. (That's true in some use cases.) We might still reduce false sharing by incrementing the cursors by some value other than 1 that is not trivially small and is coprime with the number of slots. Alternatively, we might increment the cursor by one and mask as usual, resulting in a logical index. We then use that logical index value to index into a permutation table, yielding an effective index for use in the slot array. The permutation table would be constructed so that nearby logical indices map to more distant effective indices. (Open question: what should that permutation look like? Possibly some perversion of a Gray code or De Bruijn sequence might be suitable.) As an aside, say we need to busy-wait for some condition as follows : "while C == 0 : Pause". Let's say that C is usually non-zero, so we typically don't wait. But when C happens to be 0 we'll have to spin for some period, possibly brief. We can arrange for the code to be more machine-friendly with respect to the branch predictors by transforming the loop into : "if C == 0 : for { Pause; if C != 0 : break; }".
    Critically, we want to restructure the loop so there's one branch that controls entry and another that controls loop exit. A concern is that your compiler or JIT might be clever enough to transform this back to "while C == 0 : Pause". You can sometimes avoid this by inserting a call to some type of very cheap "opaque" method that the compiler can't elide or reorder. On Solaris, for instance, you could use : "if C == 0 : { gethrtime(); for { Pause; if C != 0 : break; }}". It's worth noting the obvious duality between locks and queues. If you have a strict FIFO lock implementation with local spinning and succession by direct handoff, such as MCS or CLH, then you can usually transform that lock into a queue.
Acquire "put" access to slot via PTL-like lock Acquire "take" access to slot via PTL-like lock 2 locks : put and take -- at most one thread can access slot's mailbox Both locks use same "turn" field Like multilane : 2 cursors : put and take slot is simple 1-capacity mailbox instead of queue Borrow per-slot turn/grant from PTL Provides strict FIFO Lock slot : put-vs-put take-vs-take at most one put accesses slot at any one time at most one put accesses take at any one time reduction to 1-vs-1 instead of N-vs-M concurrency Per slot locks for put/take Release put/take by advancing turn * is instrumental in ... * P-V Semaphore vs lock vs K-exclusion * See also : FastQueues-excerpt.java dice-etc/queue-mpmc-bounded-blocking-circular-xadd/ * PTLQueue is the same as PTLQB - identical * Expedient return; ASAP; prompt; immediately * Lamport's Bakery algorithm : doorway step then waiting phase Threads arriving at doorway obtain a unique ticket number Threads enter in ticket order * In the terminology of Reed and Kanodia a ticket lock corresponds to the busy-wait implementation of a semaphore using an eventcount and a sequencer It can also be thought of as an optimization of Lamport's bakery lock was designed for fault-tolerance rather than performance Instead of spinning on the release counter, processors using a bakery lock repeatedly examine the tickets of their peers --

    Read the article

  • Where to draw the line between development-led security and administration-led security?

    - by haylem
    There are cases where you have the opportunity, as a developer, to enforce stricter security features and protections on a piece of software, though they could very well be managed at an environmental level (i.e., the operating system would take care of it). Where would you say you draw the line, and what elements do you factor into your decision?
    Concrete Examples
    User Management is the OS's responsibility
    Not exactly meant as a security feature, but in a similar case, Google Chrome used to not allow separate profiles. The invoked reason (though it now supports multiple profiles for the same OS user) used to be that user management was the operating system's responsibility.
    Disabling Web-Form Fields
    A recurrent request I see addressed online is to have auto-completion disabled on form fields. Auto-completion didn't exist in old browsers, and was a welcome feature at the time it was introduced for people who needed to fill in forms often. But it also brought in some security concerns, and so some browsers started to implement, on top of the (obviously needed) setting in their own preference/customization panel, an autocomplete attribute for form or input fields. And this has now been introduced into the upcoming HTML5 standard. For browsers which do not honor this attribute, strange hacks* are offered, like generating unique IDs and names for fields to prevent them from being suggested in future forms (which comes with another herd of issues, like polluting your local auto-fill cache and not preventing a password from being stored in it, but instead probably duplicating its occurrences). In this particular case, and others, I'd argue that this is a user setting and that it's the user's desire and the user's responsibility to enable or disable auto-fill (by disabling the feature altogether). And if it is based on an internal policy and security requirement in a corporate environment, then substitute the administrator for the user in the above. I assume it could be counter-argued that the user may want to access non-critical applications (or sites) with this handy feature enabled, and critical applications with this feature disabled. But then I'd think that's what security zones are for (in some browsers), or the sign that you need a more secure (and dedicated) environment / account to use these applications. * I obviously don't deny the ingenuity of the people who were forced to find workarounds, just the necessity of said workarounds.
    Questions
    That was a tad long-winded, so I guess my questions are: Would you in general consider it to be the application's (hence, the developer's) responsibility? Where do you draw the line, if not in the "general" case?

    Read the article

  • SQL SERVER – Simple Explanation and Puzzle with SOUNDEX Function and DIFFERENCE Function

    - by pinaldave
    Earlier this week I asked a question about how to swap the values of a column without using a CASE statement. Read here: A Puzzle – Swap Value of Column Without Case Statement. There were more than 50 solutions proposed in the comments, and many of them were creative. I have mentioned my personal favorites (different ones) here: Solution of Puzzle – Swap Value of Column Without Case Statement. However, I received lots of questions regarding one of the solutions, by SIJIN KUMAR V P. He used the function SOUNDEX in his solution. The request was to explain how SOUNDEX and DIFFERENCE work. Well, there is pretty decent documentation for the SOUNDEX function and the DIFFERENCE function on MSDN, and if I attempt to explain these functions I will end up writing the same details that are already available there. Instead of writing theory, we will try to learn these functions by using a couple of simple puzzles. Try to solve the puzzles using MSDN and see if you can learn something very quickly. In simple words, SOUNDEX converts an alphanumeric string to a four-character code to find similar-sounding words or names. The first character of the code is the first character of character_expression, and the second through fourth characters of the code are numbers that represent the letters in the expression. Vowels in character_expression are ignored unless they are the first letter of the string. The DIFFERENCE function returns an integer value. The integer returned is the number of characters in the SOUNDEX values that are the same. The return value ranges from 0 through 4: 0 indicates weak or no similarity, and 4 indicates strong similarity or the same values.
    Learning Puzzle 1: Now let us run the following four queries and observe their output.
    SELECT SOUNDEX('SQLAuthority') SdxValue
    SELECT SOUNDEX('SLTR') SdxValue
    SELECT SOUNDEX('SaLaTaRa') SdxValue
    SELECT SOUNDEX('SaLaTaRaM') SdxValue
    When you look at the result set, all four values are the same. The reason is that, to SQL Server's SOUNDEX function, all four strings sound similar.
    Learning Puzzle 2: Now let us run the following five queries and observe their output.
    SELECT DIFFERENCE (SOUNDEX('SLTR'),SOUNDEX('SQLAuthority'))
    SELECT DIFFERENCE (SOUNDEX('TH'),SOUNDEX('SQLAuthority'))
    SELECT DIFFERENCE ('SQLAuthority',SOUNDEX('SQLAuthority'))
    SELECT DIFFERENCE ('SLTR',SOUNDEX('SQLAuthority'))
    SELECT DIFFERENCE ('SLTR','SQLAuthority')
    When you look at the result set you will get results in the range from 1 to 4. Here is how it works: a result of 0 means the values are not relevant to each other at all, and the closer the result is to 4, the more closely related the values are. Have you ever used the above two functions for a business need or on a production server? If yes, please leave a comment with your use cases. I believe it will be beneficial to everyone. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How one decision can turn web services to hell

    - by DigiMortal
    In this posting I will show you how one stupid decision can turn developers' lives into hell. There is a project where a bunch of complex applications exchange data frequently and it is very hard to change anything without additional expense. Well, one analyst thought that strings are the silver bullet of web services. Read what happened.
    Bad bad mistake
    In the early stages of the integration project there was an analyst who also established the architecture and technical design for the web services. There was one very bad mistake this analyst made: All data must be converted to strings before exchange! Yes, that's correct, this was the requirement. All integers, decimals and dates come in and go out as strings. There was also an explanation for this requirement: This way we can avoid data type conversion errors! Well, this guy works somewhere else already and I hope he works in some burger restaurant – far away from computers.
    Consequences
    If you first look at this requirement it may seem like a little annoying piece of crap you can easily survive. But let's see the real consequences one stupid decision can cause:
    - a hell of a lot of data conversions are done by receiving applications and SSIS packages,
    - SSIS packages are not error prone and they depend heavily on the strings they get from different services,
    - there is more than one format per type in use across the different services,
    - for larger amounts of data all these conversion tasks slow down the work of the integration packages,
    - practically all developers have been in a hurry with some SSIS import tasks, and some fields that are not used in calculations in the SSAS cube are imported without data conversions (for example, some prices are strings in the format "1.021 $").
    The most painful problem for developers is the data conversion part, because they don't expect such a stupid requirement to have been stated and therefore they are not able to estimate how long their tasks on these web services will take. Developers must also be prepared for cases when suddenly some service sends data that is not in an acceptable format and they must solve the problems ASAP. This puts unexpected load on developers and they are not very happy with it, because they can't understand why they have to live with this horror if it is possible to fix.
    What to do if you see something like this?
    Well, explain the problem to the customer and demand that special tasks be added to the project schedule to get this mess solved before going on with new development. It is cheaper to solve the problems now than later.
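    To make the conversion hazard concrete, here is a small, hypothetical Java snippet (only the "1.021 $" price string comes from the post; everything else is invented for illustration) showing how the very same string parses to different numbers depending on the locale of the receiving application:

      import java.text.NumberFormat;
      import java.text.ParseException;
      import java.util.Locale;

      public class PriceParsingDemo {
          public static void main(String[] args) throws ParseException {
              String price = "1.021 $";                          // value received as a plain string
              String digits = price.replace("$", "").trim();     // strip the currency symbol by hand

              // The same text yields very different numbers depending on locale:
              Number us = NumberFormat.getInstance(Locale.US).parse(digits);      // 1.021
              Number de = NumberFormat.getInstance(Locale.GERMANY).parse(digits); // 1021 (dot = thousands separator)

              System.out.println("US locale reads " + digits + " as " + us);
              System.out.println("DE locale reads " + digits + " as " + de);
          }
      }

    With typed values in the contract, this ambiguity simply cannot arise; with strings, every consumer has to guess the format.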

    Read the article

  • Getting from a user-story to code while using TDD (scrum)

    - by Ittai
    I'm getting into scrum and TDD and I think I have some confusion which I'd like to get your feedback about. Let's assume I have a user story in my backlog; in order for me to start developing it as part of TDD I need to have requirements, right so far? Is it true to say that the product manager and QA should be responsible for taking the user story and breaking it down into acceptance tests? I think the above is true, since the acceptance tests need to be formal, so they can be used as tests, but also human readable, so that the product manager can approve that they are the requirements, right? Is it also true that I later take these acceptance tests and use them as my requirements, i.e. they are a set of use cases which I implement (through TDD)? I hope I'm not making too much of a mess, but that's the current flow I have in mind right now. Update: I think my initial intentions were unclear, so I'll try to rephrase. I want to know more details about the scrum flow of turning a user story into code while using TDD. The starting point is obvious: a user surfaces a need (or the user's representative, the product manager, does), which is a short 1-2 line description in the known format, and that is added to the product backlog. At the sprint planning meeting, user stories are taken from the backlog and assigned to developers. In order for a developer to write code they need requirements (especially in TDD, since the requirements are what the tests are derived from). When, by whom and in which format are the requirements compiled? What I had in mind was that the product manager and QA define the requirements via acceptance tests (I'm thinking of automating them using FitNesse or the like, but that's not the core issue, I think), which serve two purposes at the same time: they define "Done" properly, and they give a developer something to derive tests from. I wasn't sure when these should be written (if before the sprint in which the stories are picked, that might be a waste, since additional information will arrive or the story might not be picked; if during the iteration, the developer might get stuck waiting for them...)
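    As a concrete illustration of that flow, here is a minimal sketch of an acceptance test written before the production code, using JUnit 5 and an invented shopping-cart story (the Cart class, the SKU and the story text are all hypothetical, not taken from the question):

      import static org.junit.jupiter.api.Assertions.*;
      import org.junit.jupiter.api.Test;

      // Story: "As a shopper I can add an item to my cart so that I can buy it later."
      // The test restates the acceptance criteria as an executable requirement.
      class CartAcceptanceTest {

          // Minimal production class written afterwards to make the test pass
          // (normally this would live in its own file).
          static class Cart {
              private int items = 0;
              void add(String sku) { items++; }
              int itemCount()      { return items; }
          }

          @Test
          void addingAnItemIncreasesTheItemCount() {
              Cart cart = new Cart();            // Given an empty cart
              cart.add("SKU-123");               // When the shopper adds an item
              assertEquals(1, cart.itemCount()); // Then the cart contains exactly one item
          }
      }

    The Given/When/Then comments mirror the acceptance criteria, so the product manager can read the test as a requirement while the developer uses it to drive the implementation.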

    Read the article

  • Partner Webcast – More out of ODA with DB Options - 19 July 2012

    - by Thanos
    The Simple, Reliable, Affordable Path to High-Availability Databases Critical business data needs to be available 24/7 for users and customers, but it can be a struggle to find the time and resources to build a highly available database system that's reliable and affordable. That's why Oracle created the new Oracle Database Appliance—a complete package of software, server, storage, and networking. The Oracle Database Appliance integrates the world's most popular database - Oracle Database 11g - with system software, servers, storage and networking in a single box. Business gets the benefit of a reliable, secure and highly available database to support applications and maintain continuity – as well as groundbreaking ease of use. But that is not all: with support for all Oracle Database options, the Oracle Database Appliance can be the ideal solution for many use cases.
    The benefits?
    - Unmatched performance, reliability & security for your data that's there when you need it – which is all the time.
    - Fast installation, simple deployment, easy management. Out of the box.
    - Significant cost savings & reduced risk and complexity compared to integrating all the elements yourself.
    - Ongoing lower total cost of ownership with multiple automated support, detection & correction functions that also save you time.
    Discover the Oracle Database Appliance Value Proposition and learn how to position and combine it with database options to capture new business and easily roll out solutions safely and with maximum cost efficiency.
    Agenda:
    - Oracle Database & Engineered Systems Innovation
    - What's the Oracle Database Appliance?
    - Oracle Database Appliance Value Proposition
    - Oracle Database Appliance with Database Options
    - Oracle Database Appliance Partners Business
    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. Register Now! For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com. Visit our ISV Migration Center blog regularly, or follow us @oracleimc to learn more about Oracle Technologies as well as upcoming partner webcasts and events.

    Read the article

  • Will my self-taught code be fine, or should I take it to the professional level?

    - by G1i1ch
    Lately I've been getting professional work, hanging out with other programmers, and making friends in the industry. The only thing is I'm 100% self-taught. It's caused my style to deviate sharply from the style of those who are properly trained. It's the techniques and organization of my code that's different. It's a mixture of several things I do. I tend to blend several programming paradigms together, like functional and OO. I lean to the functional side more than OO, but I see the use of OO when something would make more sense as an abstract entity, like a game object. Next, I also go the simple route when doing something. In contrast, it sometimes seems like the code I see from professional programmers is complicated for the sake of it! I use lots of closures. And lastly, I'm not the best commenter. I find it easier just to read through my code than to read the comments. And in most cases I just end up reading the code even if there are comments. Plus I've been told that, because of how simply I write my code, it's very easy to read. I hear professionally trained programmers go on and on about things like unit tests - something I've never used before, so I haven't even the faintest idea of what they are or how they work. Lots and lots of underscores "_", which aren't really to my taste. Most of the techniques I use are straight from me, or from a few books I've read. I don't know anything about MVC, though I've heard a lot about it with things like backbone.js. I think it's a way to organize an application. It just confuses me, because by now I've made my own organizational structures. It's a bit of a pain. I can't use template applications at all when learning something new, like with Ubuntu's Quickly. I have trouble understanding code that I can tell is from someone trained. Complete OO programming really leaves a bad taste in my mouth, yet that seems to be what EVERYONE else is strictly using. It's left me not that confident in the look of my code, and wondering whether I'll cause sparks when joining a company or maybe contributing to open source projects. In fact I'm rather scared of the fact that people will eventually be checking out my code. Is this just something normal any programmer goes through, or should I really look to change up my techniques?

    Read the article

  • E-Business Suite Sessions at Sangam 2013 in Hyderabad

    - by Sara Woodhull
    The Sangam 2013 conference, sponsored jointly by the All-India Oracle Users' Group (AIOUG) and the India Oracle Applications Users Group (IOAUG), will be in Hyderabad, India on November 8-9, 2013.  This year, the E-Business Suite Applications Technology Group (ATG) will offer two speaker sessions and a walk-in usability test of upcoming EBS user interface features.  It's only about two weeks away, so make your plans to attend if you are in India.
    Sessions
    Oracle E-Business Suite Technology: Latest Features and Roadmap
    Veshaal Singh, Senior Director, ATG Development
    Friday, Nov. 9, 11:00-12:00
    This Oracle development session provides an overview of Oracle's product strategy for Oracle E-Business Suite technology, the capabilities and associated business benefits of recent releases, and a review of capabilities on the product roadmap. This is the cornerstone session for Oracle E-Business Suite technology. Come hear about the latest new usability enhancements of the user interface; systems administration and configuration management tools; security-related updates; and tools and options for extending, customizing, and integrating Oracle E-Business Suite with other applications.
    Integration Options for Oracle E-Business Suite
    Rekha Ayothi, Lead Product Manager, ATG
    Friday, Nov. 9, 2:00-3:00
    In this Oracle development session, you will get an understanding of how, when and where you can leverage Oracle's integration technologies to connect end-to-end business processes across your enterprise, including your Oracle Applications portfolio. This session offers a technical look at Oracle E-Business Suite Integrated SOA Gateway, Oracle SOA Suite, Oracle Application Adapters for Data Integration for Oracle E-Business Suite, and other options for integrating Oracle E-Business Suite with other applications.
    Usability Testing
    There will be multiple opportunities to participate in usability testing at Sangam '13.  The User Experience team is running a one-on-one usability study that requires advance registration.  In addition, we will be hosting a special walk-in usability lab to get feedback for new Oracle E-Business Suite OA Framework features.  The walk-in lab is a shorter usability experience that does not require any pre-registration.  In both cases, Oracle wants your feedback!  Even if you only have a few minutes, come by the User Experience Lab, meet the team, and try the walk-in lab.

    Read the article

  • What Problems Are Better Solved By SOAP Over REST?

    SOAP and REST have been battling for web service supremacy for years. In my personal opinion this debate should never have existed. Yes, both forms can be used to create an interactive web service, but each form was developed independently of the other to solve two different yet similar problems. Based on my research and experience I would have to say that REST should be the preferred web service methodology and SOAP should only be used in specific situations. Note, I did not say that I was against SOAP; in fact I actually like to use SOAP when it is needed. Criteria for using SOAP: Does the service need a guaranteed level of reliability and security? Did the provider and consumer of the service agree on a standardized data exchange format? Does the service need data context and state management? If you answer yes to any of these questions, then you may want to consider SOAP as the format for the web service. Another way to look at the relationship between REST and SOAP is to look at the medical field.  For most things, a general doctor or your family health care provider can acceptably treat most conditions, from a common cold to a broken bone. A general doctor aligns more with REST in my opinion, because for most service requirements REST fulfills a project's needs; but what happens if you need a more advanced examination? You would go to a specialist. A specialist would already have experience dealing with the specific issues you are experiencing, giving them specific context for how best to treat you going forward. SOAP acts more like a specialist doctor, given that they understand the context of an issue and can treat it based on the state of other patients they have already treated. An example of where I would use SOAP over REST in real life would be a single sign-on application. In these cases I need to validate a username and password for authentication and authorization of a web page request. This service would need to maintain state while it authenticated a user and while it validated access to a web page on a subsequent request. This service must process every request for access and not allow caching, to ensure that every request is processed and the appropriate users are allowed to view selected web pages. References: Rozlog, M. (2010). REST and SOAP: When Should I Use Each (or Both)? Retrieved November 20, 2011, from InfoQ: http://www.infoq.com/articles/rest-soap-when-to-use-each

    Read the article

  • Favorite moments of JavaOne

    - by Tori Wieldt
    There are so many events and sessions to attend at JavaOne, it's unfair to ask people to choose just one thing they liked, but here are some favorite moments:
    I loved meeting many open source contributors and friends I have not met in person before and seeing that projects like e.g. Hudson are alive and kicking and have a great future ahead of them. -Manfred Moser
    My "The Problem with Women" session. It had LOADS of interactivity from the audience, who really helped to make that session.  I came out of it with a real sense of optimism - we love our jobs, we love what we do, and we should be proud of telling everyone about it to attract different talent into the industry. (Read her blog JavaOne: The Problem With Women - A Technical Approach for details.) -Trish Gee
    My kudos to Oracle for making the presentation materials quickly available to the public. Some of them were already available during JavaOne. Lots of slide decks are already there, and in some cases you may even find the video recordings too. Go to http://www.oracle.com/javaone and select JavaOne Technical Sessions.  -Yakov Fain
    I loved that not only was James Gosling present at the Community Keynote (which felt more like the keynotes of old times [big space, big screens, fun and tech]) but he was also found wandering the halls of the Hilton the day prior. Bring back James! Add back the toys section in the Community Keynote. Let the t-shirt tossing begin anew. These are "small" things that really fire up the community. -Andres Almiray
    Seeing James Gosling at JavaOne was a real shot in the arm for Java.  He needs to be there every year. -Frank Greco
    +42 on having James and the T-shirt tossing. -Stephan Janssen
    The session "Integrate Java with Robots, Home Automation, Musical Instruments, and Kinect." Fabiane Nardon explained connecting Jenkins to jHome to a truck horn placed in their sysadmin's bedroom. She dubbed it "extreme feedback."  -Tori Wieldt
    The User Group Forum [on Sunday] was a success! Congratulations Bruno Souza and John Yeary and everybody that were involved. I believe it really helps to increase community participation! There were lots of interesting talks, and great discussion with JUG leaders and members. Thank you Oracle for supporting that! -Yara Senger
    What was your favorite moment? Please comment!

    Read the article

  • SQL SERVER – Iridium I/O – SQL Server Deduplication that Shrinks Databases and Improves Performance

    - by Pinal Dave
    Database performance is a common problem for SQL Server DBAs. It seems like we spend more time on performance than just about anything else. In many cases, we use scripts or tools that point out performance bottlenecks but we don't have any way to fix them. For example, what do you do when you need to speed up a query that is already tuned as well as possible? Or what do you do when you aren't allowed to make changes to a database supporting a purchased application? Iridium I/O for SQL Server was originally built at Confio Software (makers of Ignite) because DBAs kept asking for a way to actually fix performance instead of just pointing out performance problems. The technology is certified by Microsoft and was so promising that it was spun out into a separate company that is now run by the Confio Founder/CEO and technology management team. Iridium uses deduplication technology to both shrink databases and boost I/O performance. It is intriguing to see it work. It will deduplicate a live database as it is running transactions. You can watch the database get smaller while user queries are running. Iridium is a simple tool to use. After installing the software, you click an "Analyze" button which will spend a minute or two on each database and estimate both your storage and performance savings. Next, you click an "Activate" button to turn on Iridium I/O for your selected databases. You don't need to reboot the operating system or restart the database during any part of the process. As part of my test, I also wanted to see if there would be an impact on my databases when Iridium was removed. The 'revert' process (bringing the files back to their SQL Server native format) was executed by a simple click of a button, and completed while the databases were available for normal processing. I was impressed and enjoyed playing with the software and encourage all of you to try it out. Here is the link to the website to download Iridium for free. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Coding different states in Adventure Games

    - by Cardin
    I'm planning out an adventure game, and can't figure out the right way to implement the behaviour of a level depending on the state of story progression. My single-player game features a huge world where the player has to interact with people in a town at various points in the game. However, depending on story progression, different things would be presented to the player; for example, the Guild Leader will change location from the town square to various locations around the city; doors would only unlock at certain times of the day after finishing a particular routine; different cut-scene/trigger events happen only after a particular milestone has been reached. I naively thought of using a switch{} statement initially to decide what the NPC should say or which location he could be found at, and making quest objectives interactable only after checking a global game_state variable's condition. But I realised I would quickly run into a lot of different game states and switch-cases in order to change the behaviour of an object. That switch statement would also be massively hard to debug, and I guess it might also be hard to use in a level editor. So I thought, instead of having a single object with multiple states, maybe I should have multiple instances of the same object, each with a single state. That way, if I use something like a level editor, I can put an instance of the NPC at all the different locations he could ever appear at, and also an instance for each conversation state he has. But that means there'll be a lot of inactive, invisible game objects floating around the level, which might be trouble for memory, or simply hard to see in a level editor, I don't know. Or simply, make an identical but separate level for each game state. This feels like the cleanest and most bug-free way to do things, but it feels like massive manual work making sure each version of the level is really identical to the others. All my methods feel so inefficient, so to recap my question: is there a better or standardised way to implement the behaviour of a level depending on the state of story progression? PS: I don't have a level editor yet - thinking of using something like JME SDK or making my own.
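    One way to avoid both the giant switch statement and armies of invisible duplicate objects, sketched below purely as an illustration (the states, names and API are invented, not tied to JME or any particular engine), is to keep a single NPC object and drive its location and dialogue from a per-state lookup table that a level editor could populate:

    import java.util.EnumMap;
    import java.util.Map;

    // Story milestones that drive the world's behaviour (names are made up).
    enum StoryState { PROLOGUE, GUILD_QUEST_STARTED, GUILD_QUEST_DONE }

    // What one NPC looks like in one particular story state.
    final class NpcSnapshot {
        final String location;
        final String dialogue;
        NpcSnapshot(String location, String dialogue) {
            this.location = location;
            this.dialogue = dialogue;
        }
    }

    // A single NPC holds a table of snapshots instead of a switch statement.
    final class Npc {
        private final Map<StoryState, NpcSnapshot> snapshots =
                new EnumMap<>(StoryState.class);

        void define(StoryState state, String location, String dialogue) {
            snapshots.put(state, new NpcSnapshot(location, dialogue));
        }

        NpcSnapshot in(StoryState state) {
            return snapshots.get(state); // null means "absent in this state"
        }
    }

    public class StateTableDemo {
        public static void main(String[] args) {
            Npc guildLeader = new Npc();
            guildLeader.define(StoryState.PROLOGUE,
                    "town_square", "Looking for work? Visit the guild.");
            guildLeader.define(StoryState.GUILD_QUEST_STARTED,
                    "guild_hall", "Have you finished the errand yet?");
            guildLeader.define(StoryState.GUILD_QUEST_DONE,
                    "tavern", "You did well. Drinks are on me.");

            StoryState current = StoryState.GUILD_QUEST_STARTED;
            NpcSnapshot view = guildLeader.in(current);
            System.out.println(view.location + ": " + view.dialogue);
        }
    }

    The per-state table is exactly the kind of data a level editor could edit, and only the snapshot for the current StoryState ever needs to be placed in the level, so there are no inactive duplicate game objects floating around.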

    Read the article

  • Role based access control in Oracle VM using Enterprise Manager 12c

    - by Ronen Kofman
    Enterprise Manager lets you control any element in the environment and define which users can do what on each element. We will show here an example of how to set up RBAC (Role Based Access Control) for Oracle VM using Enterprise Manager. This will be a very simplified explanation to help you get going; for more comprehensive explanations please refer to the Enterprise Manager User Guide. OK, first some basic Enterprise Manager terminology: Target – any element in the environment is a target – server, pool, zone, VM etc. Administrators – these are the Enterprise Manager users who can log in to the platform. Roles – roles are privilege profiles which can be applied to Administrators. The first step will be to discover the virtual environment and bring it into Enterprise Manager. This process is simple and can be done in two ways: Work on your Oracle VM Manager, set it up until you feel comfortable and then register it in Enterprise Manager. Use Enterprise Manager and build it all from there. In both cases we will be able to see the same picture from Oracle VM and from Enterprise Manager; any change made in one will be reflected in the other. (The original post includes screenshots of Oracle VM Manager and Enterprise Manager here.) Once you have your virtual environment set up in Enterprise Manager it is time to start associating VMs with users (or Administrators, as they are called in Enterprise Manager). Enterprise Manager allows us to connect to multiple different identity services and import users from them, but the simplest way to add Administrators is to go to Setup -> Security -> Administrators and create a new Administrator. The creation wizard will walk you through several stages and allow you to assign role(s) to your newly created Administrator; using roles can really shorten the process if done multiple times. When you get to the "Target Privileges" stage, scroll down to the "Target Privileges" section at the bottom. In this section you can add targets (virtual machines in our case) and define the type of privileges you would like to assign to the Administrator you are creating. In this example I chose one of the VMs and granted full privileges to the newly created Administrator. (The original post includes a screenshot of the Administrator creation wizard's "Target Privileges" step here.) Now when you log in as the newly created Administrator, you will only see the VM that was assigned to you and will have full control over it. That's it, simple and straightforward. Enterprise Manager offers many more features which I skipped here, but the point is that if you need role-based access control, Enterprise Manager can give it to you in a very easy way. Oh, and one more thing: virtualization management in Enterprise Manager has no license cost. Sweet.

    Read the article

  • Projected Results: Sound project management practices, combined with a complete technology platform, have an immediate and lasting impact on an organization’s bottom line.

    - by Melissa Centurio Lopes
    Article by Alan Joch, a business and technology writer who specializes in enterprise applications, cloud computing, mobile computing, and the Web. It's no secret that complex, large-scale projects need close management controls to ensure that they're delivered on time and on budget. But now there's growing evidence that failing to meet these goals can have far-reaching consequences, not only for the reputations and value of individual organizations but also for the tenure of their top executives. Government watchdogs forced one large contractor to suspend a multibillion-dollar defense program—and delay payment receipts—until a better management system was launched to more accurately track spending, project milestones, and other fundamental metrics. Significant delays in the opening of the £4.3 billion Terminal 5 at Heathrow Airport impaired an airline's operations and contributed to a drop in its share prices. These real-world examples are noteworthy because of the huge financial risks they created. They're also far from being isolated cases. Research by the Economist Intelligence Unit found that only 11 percent of companies claimed they delivered expected ROI on major capital projects 90 percent of the time or more. In addition, 12 percent of respondents said they achieved planned ROI less than half the time. According to Phil Thornton, lead consultant at the analyst firm Clarity Economics, the numbers demonstrate obvious challenges related to managing risks, accurately predicting ROI, and consistently delivering bottom-line growth for major capital investments. "Portfolio management is a path to improve your organization's competitive advantage. It helps make sure your organization is investing in the right things and not spending its time on things that are not delivering the intended results for the firm." Read the full article here

    Read the article

  • MySQL and Hadoop Integration - Unlocking New Insight

    - by Mat Keep
    “Big Data” offers the potential for organizations to revolutionize their operations. With the volume of business data doubling every 1.2 years, analysts and business users are discovering very real benefits when integrating and analyzing data from multiple sources, enabling deeper insight into their customers, partners, and business processes. As the world's most popular open source database, and the most deployed database in the web and cloud, MySQL is a key component of many big data platforms, with Hadoop vendors estimating 80% of deployments are integrated with MySQL. The new Guide to MySQL and Hadoop presents the tools enabling integration between the two data platforms, supporting the data lifecycle from acquisition and organisation to analysis and visualisation/decision, as shown in the figure in the guide. The Guide details each of these stages and the technologies supporting them: Acquire: Through new NoSQL APIs, MySQL is able to ingest high volume, high velocity data without sacrificing ACID guarantees, thereby ensuring data quality. Real-time analytics can also be run against newly acquired data, enabling immediate business insight before data is loaded into Hadoop. In addition, sensitive data can be pre-processed; for example, healthcare or financial services records can be anonymized before transfer to Hadoop. Organize: Data is transferred from MySQL tables to Hadoop using Apache Sqoop. With the MySQL Binlog (Binary Log) API, users can also invoke real-time change data capture processes to stream updates to HDFS. Analyze: Multi-structured data ingested from multiple sources is consolidated and processed within the Hadoop platform. Decide: The results of the analysis are loaded back to MySQL via Apache Sqoop, where they inform real-time operational processes or provide source data for BI analytics tools. So how are companies taking advantage of this today? As an example, on-line retailers can use big data from their web properties to better understand site visitors' activities, such as paths through the site, pages viewed, and comments posted. This knowledge can be combined with user profiles and purchasing history to gain a better understanding of customers, and the delivery of highly targeted offers. Of course, it is not just on the web that big data can make a difference. Every business activity can benefit, with other common use cases including: sentiment analysis; marketing campaign analysis; customer churn modeling; fraud detection; research and development; risk modeling; and more. As the guide discusses, Big Data is promising a significant transformation of the way organizations leverage data to run their businesses. MySQL can be seamlessly integrated within a Big Data lifecycle, enabling the unification of multi-structured data into common data platforms, taking advantage of all new data sources and yielding more insight than was ever previously imaginable. Download the guide to MySQL and Hadoop integration to learn more. I'd also be interested in hearing about how you are integrating MySQL with Hadoop today, and your requirements for the future, so please use the comments on this blog to share your insights.
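    As a rough illustration of the "Decide" stage only (this sketch is not from the guide; the table, columns, credentials and threshold are all assumptions), an operational Java application could read scores that a Sqoop export job has loaded back into MySQL and hand the high-risk customers to a retention workflow:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class DecideStageReader {
        public static void main(String[] args) throws Exception {
            // Hypothetical schema: Sqoop has exported Hadoop results into a
            // MySQL table named campaign_scores(customer_id, churn_score).
            String url = "jdbc:mysql://localhost:3306/analytics";
            try (Connection conn = DriverManager.getConnection(url, "appuser", "secret");
                 PreparedStatement ps = conn.prepareStatement(
                         "SELECT customer_id, churn_score FROM campaign_scores " +
                         "WHERE churn_score > ? ORDER BY churn_score DESC")) {
                ps.setDouble(1, 0.8); // assumed risk threshold
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // Feed each high-risk customer to the operational process.
                        System.out.printf("%d -> %.2f%n",
                                rs.getLong("customer_id"), rs.getDouble("churn_score"));
                    }
                }
            }
        }
    }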

    Read the article

  • Extending the ADF Controller exception handler

    - by frank.nimphius
    The Oracle ADF Controller provides a declarative option for developers to define a view activity, method activity or router activity to handle exceptions in bounded or unbounded task flows. Exception handling, however, is for exceptions only and does not cover all types of Throwable. Furthermore, exceptions that occur during the JSF RENDER RESPONSE phase are not looked at either, as that is considered too late in the cycle. Developers who want to handle otherwise unhandled exceptions in the ADF Controller can extend the default exception handling while still leveraging the declarative configuration. To add your own exception handler:
    · Create a Java class that extends ExceptionHandler
    · Create a text file with the name "oracle.adf.view.rich.context.ExceptionHandler" (without the quotes) and store it in .adf\META-INF\services (you need to create the "services" folder)
    · In the file, add the fully qualified name of your custom exception handler class (package name and class name, without the ".class" extension)
    For any exception you don't handle in your custom exception handler, just re-throw it for the default handler to give it a try …

    import javax.faces.context.FacesContext;
    import javax.faces.event.PhaseId;
    import oracle.adf.view.rich.context.ExceptionHandler;

    public class MyCustomExceptionHandler extends ExceptionHandler {

        public MyCustomExceptionHandler() {
            super();
        }

        public void handleException(FacesContext facesContext,
                                    Throwable throwable, PhaseId phaseId)
                                    throws Throwable {
            // check the error message and handle it if you can
            String errorMessage = throwable.getMessage();
            if ( … ) {
                // handle the exception
                …
            } else {
                // delegate to the default ADFc exception handler
                throw throwable;
            }
        }
    }

    Note, however, that it is recommended to first try to handle exceptions with the ADF Controller's default exception handling mechanism. In the past, I've seen attempts on OTN to handle regular application use cases with custom exception handlers where there was no need to override the exception handler. So don't go for this solution too quickly, and always think of alternative solutions. Sometimes a try-catch-finally block does it better than sophisticated web exception handling.

    Read the article

  • Should I be put off a junior role that uses an online development test?

    - by Ninefingers
    I've applied for a junior development role, or rather been found by a recruiter looking for a developer. In order to get to the telephone interview stage I've been asked to sit one of those online coding assessments. This wasn't quite what I expected. I consider myself a fairly good developer for my age and experience, but I've no illusions about being Don Knuth or anything. The test was a series of incredibly obtuse questions asking about the results of various obscure evaluations. About 30 minutes in I was thinking to myself that I hadn't intended to enter an obfuscated code contest/code golf exercise. After my last telephone interview I was asked to build something. I did. That seemed fair. "Go away and work this out" is closer to my in-office experience of programming than "please evaluate this combination of lambdas, filters, maps, lists, tuples etc". So I'm a little put off, to be honest. I never claimed to know the language inside out or all the little corner cases. My questions, then: Should I be put off? Why? Why not? Are these kinds of tests what I should be expecting for junior roles? Should I learn stuff exam-style? That seems to be the objective of these tests, for which you are timed and not supposed to use references or books. Normally, in the course of development I have a fairly good idea of basic types, rules, flow control and whatever. Occasionally I'll come up against something I need to use a regex for and have to go and remind myself of the exact piece of syntax I need if trying what I think should work doesn't. Or I'll come up against a module I've not used before and go and look it up. For example, if I wanted to write a server using sockets in C right now, I'd probably check the last piece of code I wrote doing that (and/or the various books I have) and work from there. Chances are I probably couldn't do it exactly from scratch and from memory, although I can tell you you'd need a socket(), bind(), listen() and accept() call, and you might also want select() depending on whether you intend to pthread_create or not. So I know what the calls are, but not their specific parameter lists. What are your experiences if you are a recruiting manager? Are you after programmers who can quote you the API, or do you not mind if your programmers have a few books on their desk and google function calls every so often?

    Read the article

  • ObjectStorageHelper<T> now available for Windows 8 RTM

    - by jamiet
    In October 2011 I wrote a blog post entitled ObjectStorageHelper<T> – A WinRT utility for Windows 8 where I introduced a little utility class called ObjectStorageHelper<T> that I had been working on while noodling around on the Developer Preview of Windows 8. ObjectStorageHelper<T> makes it easy for anyone building apps for Windows 8 to save data to files. How easy? As easy as this:

    var myPoco = new Poco() { IntProp = 1, StringProp = "one" };
    var objectStorageHelper = new ObjectStorageHelper<Poco>(StorageType.Local);
    await objectStorageHelper.SaveAsync(myPoco);

    Compare that to the plumbing code that you would have to write otherwise:

    var Obj = new Poco() { IntProp = 1, StringProp = "one" };
    StorageFile file = null;
    StorageFolder folder = GetFolder(storageType);
    file = await folder.CreateFileAsync(FileName(Obj), CreationCollisionOption.ReplaceExisting);
    IRandomAccessStream writeStream = await file.OpenAsync(FileAccessMode.ReadWrite);
    using (Stream outStream = Task.Run(() => writeStream.AsStreamForWrite()).Result)
    {
        serializer.Serialize(outStream, Obj);
        await outStream.FlushAsync();
    }

    and you can see how ObjectStorageHelper<T> can help save a Windows 8 developer quite a few headaches. ObjectStorageHelper<T> simply requires you to pass it an object to be saved, tell it where to save it (Roaming, Local or Temporary), and you're done. Retrieving an object from storage is equally simple:

    var objectStorageHelper = new ObjectStorageHelper<Poco>(StorageType.Local);
    var myPoco = await objectStorageHelper.LoadAsync();

    Please check the homepage for the project at http://winrtstoragehelper.codeplex.com/ for (much) more info. A number of people have used and tested ObjectStorageHelper<T> since those early days, and one of those folks in particular, David Burela, was good enough to report a couple of bugs: "Saving Asynchronously" and "Save fails when class is in another project". As a result of David's bug reports and some more extensive testing on my side I have overhauled the initial code that I wrote last October and am confident that it is now much more robust and ready for primetime (check the commit history if you're interested). The source code (which, again, you can find on Codeplex at http://winrtstoragehelper.codeplex.com/) includes a suite of unit tests to test all of the basic use cases (if you can think of any more please let me know). If you use this in any of your Windows 8 projects then please let me know. I love getting feedback and I'd also love to know if this is actually being used anywhere. @Jamiet

    Read the article

  • Equal Gifts Algorithm Problem

    - by 7Aces
    Problem Link - http://opc.iarcs.org.in/index.php/problems/EQGIFTS It is Lavanya's birthday and several families have been invited for the birthday party. As is customary, all of them have brought gifts for Lavanya as well as her brother Nikhil. Since their friends are all of the erudite kind, everyone has brought a pair of books. Unfortunately, the gift givers did not clearly indicate which book in the pair is for Lavanya and which one is for Nikhil. Now it is up to their father to divide up these books between them. He has decided that from each of these pairs, one book will go to Lavanya and one to Nikhil. Moreover, since Nikhil is quite a keen observer of the value of gifts, the books have to be divided in such a manner that the total value of the books for Lavanya is as close as possible to the total value of the books for Nikhil. Since Lavanya and Nikhil are kids, no book that has been gifted will have a value higher than 300 Rupees... For the problem, I couldn't think of anything except recursion. The code I wrote is given below. But the problem is that the code is time-inefficient and gives TLE (Time Limit Exceeded) for 9 out of 10 test cases! What would be a better approach to the problem? Code -

    #include<cstdio>
    #include<climits>
    #include<algorithm>
    using namespace std;

    int n, g[150][2];

    int diff(int a, int b, int f) {
        ++f;
        if (f == n) {
            if (a > b) {
                return a - b;
            } else {
                return b - a;
            }
        }
        return min(diff(a + g[f][0], b + g[f][1], f), diff(a + g[f][1], b + g[f][0], f));
    }

    int main() {
        int i;
        scanf("%d", &n);
        for (i = 0; i < n; ++i) {
            scanf("%d%d", &g[i][0], &g[i][1]);
        }
        printf("%d", diff(g[0][0], g[0][1], 0));
    }

    Note - It is just a practice question, and is not part of a competition.
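    For what it's worth, the standard way around the exponential recursion (offered here only as a sketch, not as the official solution) is a subset-style dynamic program: because no book costs more than 300 Rupees, every total that Lavanya could receive fits in a small table, and the answer is the reachable total closest to half of the overall sum. The bound of 150 pairs is assumed from the array size in the posted code; the sketch below is in Java rather than C++.

    import java.util.Scanner;

    public class EqualGifts {
        public static void main(String[] args) {
            Scanner in = new Scanner(System.in);
            int n = in.nextInt();
            int[] a = new int[n], b = new int[n];
            int total = 0;
            for (int i = 0; i < n; i++) {
                a[i] = in.nextInt();
                b[i] = in.nextInt();
                total += a[i] + b[i];
            }

            int maxSum = 150 * 300;                 // largest total Lavanya could get
            boolean[] reachable = new boolean[maxSum + 1];
            reachable[0] = true;
            for (int i = 0; i < n; i++) {
                // Exactly one book of pair i goes to Lavanya.
                boolean[] next = new boolean[maxSum + 1];
                for (int s = 0; s <= maxSum; s++) {
                    if (!reachable[s]) continue;
                    if (s + a[i] <= maxSum) next[s + a[i]] = true;
                    if (s + b[i] <= maxSum) next[s + b[i]] = true;
                }
                reachable = next;
            }

            int best = Integer.MAX_VALUE;
            for (int s = 0; s <= maxSum; s++) {
                // Nikhil gets total - s, so the gap for this split is |total - 2s|.
                if (reachable[s]) best = Math.min(best, Math.abs(total - 2 * s));
            }
            System.out.println(best);
        }
    }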

    Read the article

  • Manageability at Oracle OpenWorld: a Guide to sessions for Partners

    - by Javier Puerta
    A large number of sessions focusing on Manageability will be taking place during the week of Oracle OpenWorld in San Francisco. To help you organize your schedule I am including below a list of sessions and events around Manageability that you will find of interest.

    PARTNER SPECIFIC SESSIONS

    Monday, October 1st, 2012, 15:30 - 18:00 PST, Grand Hyatt San Francisco, 345 Stockton Street, San Francisco (Conference Theater) (a 15-minute walk from the OOW Moscone Center; see directions here)
    Exadata & Manageability EMEA Partner Community Forum - Listen to other partners share their experiences in selling and implementing Exadata and Manageability projects, and have a direct dialogue with some of the Oracle executives who are driving the strategy of the company in these areas.
    Agenda:
    Welcome - Hans-Peter Kipfer, VP, Engineered Systems, Oracle EMEA
    Next challenges in building and managing clouds - Javier Cabrerizo, VP, Business Development for Exadata, Oracle Corp.
    Partner Experiences: IT modernization, simplification and cost reduction: the case of a customer in Transportation & Logistics with custom applications and SAP - Francisco Bermudez, Country Leader Infrastructure Services, Capgemini, Spain
    Nvision cloud project - Dmitry Krasilov, Head of Oracle Competence Center, Nvision Group, Russia
    From Exadata Ready to Exadata Optimized: An ISV Experience - Miguel Alves, Product Business Solutions Manager, WeDo Technologies, Portugal
    To confirm your participation send an email to [email protected]

    Tuesday, Oct 2, 11:45 AM - 12:45 PM, Marriott Marquis, Golden Gate A
    Developing Services for Private and Public Clouds - The Oracle Cloud provides new business opportunities, secures business applications and data, and provides operational efficiencies and cost savings. For customers lacking the skill or time to architect, develop, or build a cloud, there is a growing demand for services practice partners that can deliver and manage Oracle Cloud solutions. In this session:
    • Become familiar with services examples and use cases that demonstrate how an Oracle Cloud can provide a solution to a customer's needs today
    • Learn about Oracle architecture and best practices available for Oracle Cloud instances
    • Identify the right Oracle technology and the optimal model for meeting customer needs while providing excellent revenues and an optimal margin for services delivered

    Wednesday, Oct 3, 1:15 PM - 2:15 PM, Marriott Marquis, Golden Gate B
    Using Management Already Built into Oracle Products: Oracle Enterprise Manager - Engineered into Oracle products are management capabilities ready to be used. In this session, applicable to all partners, understand the growing market opportunities and how to use or include Oracle Enterprise Manager as part of your solution or services.

    Other Cloud sessions for Partners at the Oracle PartnerNetwork Exchange: click here.

    OOW CUSTOMER SESSIONS

    Download the Focus On Oracle Enterprise Manager Cloud Control 12c (and Private Cloud) guide for a full list of Enterprise Manager OOW sessions.

    Read the article

< Previous Page | 124 125 126 127 128 129 130 131 132 133 134 135  | Next Page >