Search Results

Search found 8605 results on 345 pages for 'general dynamics'.


  • How to Create a Send/Receive Group for RSS Feeds in Outlook 2013

    - by Lori Kaufman
    If you choose to manually update your RSS feeds on demand, there is a way to do this without having to send and receive your email at the same time. You can create a special Send/Receive Group for your RSS feeds. NOTE: If you choose to not have your RSS feeds updated automatically, creating a separate Send/Receive Group for your RSS feeds is useful so you can update them when you want to. To begin creating a new Send/Receive Group, click the File tab. Click Options in the menu on the left side of the Account Information screen. On the Outlook Options dialog box, click Advanced in the left pane list of menu options. In the right pane, scroll down to the Send and receive section and click the Send/Receive button. On the Send/Receive Groups dialog box, click New next to the list of groups. On the Send/Receive Group Name dialog box, enter a name, such as “RSS Feeds On Demand Only,” in the edit box and click OK. For all the other Accounts, except RSS, in the list on the left, de-select the Include RSS Feeds in this Send/Receive group check box so there is NO check mark in the box. Click RSS under Accounts, and make sure the Include RSS Feeds in this Send/Receive group check box is selected. NOTE: If you want to have a separate Send/Receive group for each RSS Feed or group certain RSS feeds together, you can turn on and off specific feeds in the lower half of the Send/Receive Settings dialog box. If you decide to do this, you might specify a more appropriate name for each Send/Receive group for the RSS feeds. Click OK to accept your changes and close the Send/Receive dialog box. Make sure your new Send/Receive group is selected in the list of groups on the Send/Receive Groups dialog box. De-select all the options under Setting for group section at the bottom of the dialog box and click Close. This prevents this group from being updated when you click the general Send/Receive button to retrieve your email. Click OK on the Outlook Options dialog box. To manually update your RSS feeds, click the Send / Receive tab. Click Send/Receive Groups and select your new group from the drop-down list. You can change, rename, or remove any Send/Receive Groups you create by accessing the Send/Receive Groups dialog box again.     
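
    If you would rather trigger this on-demand group from code instead of the ribbon, the Outlook object model exposes Send/Receive groups as SyncObjects. The snippet below is only an illustrative sketch: it assumes a project that references the Microsoft.Office.Interop.Outlook assembly and reuses the group name from the example above.

        // Sketch: start a named Send/Receive group through the Outlook object model.
        // Assumes a reference to Microsoft.Office.Interop.Outlook and an installed Outlook profile.
        using Outlook = Microsoft.Office.Interop.Outlook;

        class RssSendReceive
        {
            static void Main()
            {
                var app = new Outlook.Application();
                Outlook.SyncObjects groups = app.Session.SyncObjects;

                foreach (Outlook.SyncObject group in groups)
                {
                    if (group.Name == "RSS Feeds On Demand Only")
                    {
                        group.Start();   // same effect as choosing the group under Send/Receive Groups
                        break;
                    }
                }
            }
        }

    This is simply another way to run the same group you defined in the steps above; it does not change how Outlook schedules (or does not schedule) the group.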

    Read the article

  • how to update child records when updating the Master table using Linq [closed]

    - by user20358
    I currently use a general repository class that can update only a single table, like so:

        public abstract class MyRepository<T> : IRepository<T> where T : class
        {
            protected IObjectSet<T> _objectSet;
            protected ObjectContext _context;

            public MyRepository(ObjectContext Context)
            {
                _objectSet = Context.CreateObjectSet<T>();
                _context = Context;
            }

            public IQueryable<T> GetAll()
            {
                return _objectSet.AsQueryable();
            }

            public IQueryable<T> Find(Expression<Func<T, bool>> filter)
            {
                return _objectSet.Where(filter);
            }

            public void Add(T entity)
            {
                _objectSet.AddObject(entity);
                _context.ObjectStateManager.ChangeObjectState(entity, System.Data.EntityState.Added);
                _context.SaveChanges();
            }

            public void Update(T entity)
            {
                _context.ObjectStateManager.ChangeObjectState(entity, System.Data.EntityState.Modified);
                _context.SaveChanges();
            }

            public void Delete(T entity)
            {
                _objectSet.Attach(entity);
                _context.ObjectStateManager.ChangeObjectState(entity, System.Data.EntityState.Deleted);
                _objectSet.DeleteObject(entity);
                _context.SaveChanges();
            }
        }

    For every table class generated by my EDMX designer I create another class like this:

        public class CustomerRepo : MyRepository<Customer>
        {
            public CustomerRepo(ObjectContext context) : base(context) { }
        }

    For any updates that I need to make to a particular table I do this:

        Customer CustomerObj = new Customer();
        CustomerObj.Prop1 = ...
        CustomerObj.Prop2 = ...
        CustomerObj.Prop3 = ...
        CustomerRepo.Update(CustomerObj);

    This works perfectly well when I am updating just the specific table called Customer. Now, if I also need to update each row of another table called Orders, which is a child of Customer, what changes do I need to make to the MyRepository class? The Orders table will have multiple records for a Customer record, and multiple fields too, say Field1, Field2, and Field3. So my questions are: 1) If I only need to update Field1 of the Orders table for some rows based on a condition, and Field2 for some other rows based on a different condition, what changes do I need to make? 2) If there is no such condition and all child rows need to be updated with the same value, what changes do I need to make? Thanks for taking the time. Look forward to your inputs...
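
    One way to approach the child updates (a sketch only, not the poster's code): because the ObjectContext already tracks every entity it has loaded, you can load the parent together with its children, modify whichever child rows match your conditions, and let a single SaveChanges persist everything. The example below assumes an EDMX-generated context class named MyEntities, a Customer entity exposing an Orders navigation property, and Order fields named Field1/Field2/Field3 - all of these names are assumptions for illustration, so it will only compile against a model that actually defines them.

        // Sketch: update a Customer and its child Orders in one unit of work.
        // MyEntities, Customer, Order, CustomerId and Field1..Field3 are assumed names.
        using System.Data.Objects;
        using System.Linq;

        public static class CustomerOrderUpdater
        {
            public static void UpdateWithOrders(MyEntities context, int customerId)
            {
                // Load the parent together with its child rows.
                var customer = context.Customers
                                       .Include("Orders")
                                       .Single(c => c.CustomerId == customerId);

                // 1) Conditional child updates: different fields for different rows.
                foreach (var order in customer.Orders)
                {
                    if (order.Field3 > 100)          // hypothetical condition A
                        order.Field1 = "updated A";
                    else if (order.Field3 == 0)      // hypothetical condition B
                        order.Field2 = "updated B";
                }

                // 2) Unconditional child update: the same value for every row.
                foreach (var order in customer.Orders)
                {
                    order.Field1 = "same value for all";
                }

                // The context has tracked all of the above, so one call persists the parent
                // and every modified child in a single transaction; no manual
                // ChangeObjectState call is needed for entities loaded from the context.
                context.SaveChanges();
            }
        }

    A generic repository could expose this as something like an UpdateGraph(root, mutate) method that applies the mutation and then calls SaveChanges once, but how far to generalise it is a design choice rather than something the pattern dictates.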

    Read the article

  • Increasing deadlocks with NoLock

    - by Dave Ballantyne
    One of my personal pet issues is the inappropriate use of the NOLOCK hint (and read uncommitted). Don't get me wrong, I have used it in exceptional circumstances, but as a general statement it is a bad thing. Mostly, when NOLOCK is used, the discussion is around a single statement - "it runs faster with nolock for XYZ reason" - however, IMO, this is quite a short-sighted view. What about the transaction? What about other concurrent users? What is good for one statement in isolation is not necessarily good for the system as a whole. I have seen on a number of occasions deadlocks happen when tasks that would have (and should have) been blocked continue to execute, only for a deadlock to occur at a later data-writing (INSERT, UPDATE, DELETE) statement. Writers will block writers regardless of isolation level.

    By way of a (fairly contrived) example, let's generate some dummy tables and populate them with some data:

        drop table a
        go
        drop table b
        go
        Create Table a ( col1 integer )
        go
        insert into a values(1)
        insert into a values(2)
        go
        Create Table b ( col1 integer )
        go
        insert into b values(1)
        insert into b values(2)
        go

    Now make two connections. In connection one execute:

        set transaction isolation level read committed
        BEGIN TRAN
        Select * from a
        Select * from b
        delete from a

    In connection two execute:

        set transaction isolation level read committed
        BEGIN TRAN
        Select * from a
        Select * from b
        delete from b

    Right now the 'Select * from a' in connection two is being blocked by the 'delete from a' in connection one. This is, IMO, quite a healthy and natural thing to be happening, though some see this as a 'slow down', a drop in performance. So, let's reach for our 'NOLOCK' magic pill. Cancel the blocked query and ROLLBACK both transactions, then in connection one execute:

        set transaction isolation level read uncommitted
        BEGIN TRAN
        Select * from a
        Select * from b
        delete from b

    and then in connection two execute:

        set transaction isolation level read uncommitted
        BEGIN TRAN
        Select * from a
        Select * from b
        delete from a

    We have now solved our performance problem, no more blocking. Let's finish the work required by the transaction; in connection one, execute:

        delete from a

    Oh, 'performance problem' again, it's now being blocked. Still, let's complete the work in connection two...

        delete from b

    DEADLOCK!! It is important to be clear about the role of the select statements. They do not participate within the deadlock, but they are preventing code from executing that would have. Additionally, without the select readers to block, a deadlock would occur on the deletes with READ COMMITTED. Naturally, other isolation levels will exhibit different behaviour as to where and when they will and won't block, and I would encourage you to read BOL and satisfy yourself that you really do NEED to NOLOCK.
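
    As a footnote to the closing point about other isolation levels: if the real requirement is that readers should neither block nor be blocked, SQL Server's row-versioning isolation levels are usually a safer answer than NOLOCK, because readers see a consistent versioned view instead of uncommitted data. A minimal client-side sketch, assuming snapshot isolation has been enabled on the database (ALTER DATABASE ... SET ALLOW_SNAPSHOT_ISOLATION ON) and using a placeholder connection string:

        // Sketch: read tables a and b under SNAPSHOT isolation from a client.
        // Assumes ALLOW_SNAPSHOT_ISOLATION is ON for the database; connection string is a placeholder.
        using System;
        using System.Data;
        using System.Data.SqlClient;

        class SnapshotReadDemo
        {
            static void Main()
            {
                const string connectionString = "Server=.;Database=Sandbox;Integrated Security=true";

                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();

                    // BeginTransaction(IsolationLevel.Snapshot) runs the session under
                    // SNAPSHOT isolation, so these reads take no shared locks.
                    using (var transaction = connection.BeginTransaction(IsolationLevel.Snapshot))
                    {
                        using (var command = new SqlCommand("select * from a; select * from b;", connection, transaction))
                        using (var reader = command.ExecuteReader())
                        {
                            do
                            {
                                while (reader.Read())
                                {
                                    Console.WriteLine(reader.GetInt32(0));
                                }
                            } while (reader.NextResult());
                        }

                        transaction.Commit(); // read-only work, but keeps the transaction scope explicit
                    }
                }
            }
        }

    Whether row versioning (and its tempdb version-store cost) is appropriate for a given system is a separate decision; the point is only that it is one of the alternatives worth evaluating before reaching for NOLOCK.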

    Read the article

  • Log Debug Messages without Debug Serial on Shipped Device

    - by Kate Moss' Open Space
    Debug messages are one of the ancient but still useful ways of resolving problems. The message is redirected to Platform Builder if KITL is enabled; otherwise it goes to the default debug port, usually a serial port on most platforms, but it really depends on how OEMWriteDebugString and OEMWriteDebugByte are implemented. For many reasons, we may not want to have a debug serial port; for example, we don't have enough spare serial ports, or it can affect performance. So some BSP designers decide to dump the messages to other media - a log file, shared memory, or any solution that suits the need.

    In CE 5.0 and earlier, the OAL and kernel are linked into one binary; in other words, you can use whatever function is in the kernel, such as SC_CreateFileW, to access the filesystem from the OAL, even though this is strongly not recommended. But since the OAL became a standalone executable in CE 6.0, we can no longer use this back door, only the interface exported in NKGlobal, which provides just enough for the OAL but no more. Accessing the filesystem or using sync objects to communicate with other drivers or applications is not even an option. It sounds like the kernel locks itself up; of course, the OAL is in kernel space, so you can still do whatever you want to hack into the kernel, but once again, that not only makes for a dirty solution but also a fragile one.

    So isn't there an elegant solution? Let's see how a debug message gets printed out. In private\winceos\COREOS\nk\kernel\printf.c, OutputDebugStringW is the function that pumps out the messages; most of the code is for error handling and serialization, but what is really interesting is the following code piece:

        if (g_cInterruptsOff) {
            OEMWriteDebugString ((unsigned short *)str);
        } else {
            g_pNKGlobal->pfnWriteDebugString ((unsigned short *)str);
        }
        CELOG_OutputDebugString(dwActvProcId, dwCurThId, str);

    It outputs the message to the default debug output (redirected to KITL when available) or to the OAL when needed, but note the last line: it also invokes CELOG_OutputDebugString. Follow the thread to private\winceos\COREOS\nk\logger\CeLogInstrumentation.c, and you'll see this function dumps whatever it receives to CELOG. So whatever the debug message is, we always get a clone in CELOG. Generally speaking, all of the debug messages are logged to CELOG already, so what you need to do is use celogflush.exe with the CELZONE_DEBUG zone, and then view the data using the Readlog tool. Here is some information about these tools: CELOG - http://msdn.microsoft.com/en-us/library/ee479818.aspx READLOG - http://msdn.microsoft.com/en-us/library/ee481220.aspx Also, for advanced readers, I encourage you to dig into private\winceos\COREOS\nk\celog\celogdll, the source of CELOG.DLL, and use it as a starting point to create a more lightweight debug message logger for your own device!

    Read the article

  • Developer – Cross-Platform: Fact or Fiction?

    - by Pinal Dave
    This is a guest blog post by Jeff McVeigh. Jeff McVeigh is the general manager of Performance Client and Visual Computing within Intel’s Developer Products Division. His team is responsible for the development and delivery of leading software products for performance-centric application developers spanning Android*, Windows*, and OS X* operating systems. During his 17-year career at Intel, Jeff has held various technical and management positions in the fields of media, graphics, and validation. He also served as the technical assistant to Intel’s CTO. He holds 20 patents and a Ph.D. in electrical and computer engineering from Carnegie Mellon University.

    It’s not a homogenous world. We all know it. I have a Windows* desktop, a MacBook Air*, an Android phone, and my kids are 100% Apple. We used to have 2.5 kids, now we have 2.5 devices. And we all agree that diversity is great, unless you’re a developer trying to prioritize the limited hours in the day. Then it’s a series of trade-offs. Do we become brand loyalists for Google or Apple or Microsoft? Do we specialize on phones and tablets or still consider the 300M+ PC shipments a year when we make our decisions on where to spend our time and resources? We weigh the platform options, monetization opportunities, APIs, and distribution models. Too often, I see developers choose one platform, or write to the lowest common denominator, which limits their reach and market success. But who wants to be “me too”? Cross-platform coding is possible in some environments, for some applications, for some level of innovation—but it’s not all-inclusive, yet. There are some tricks of the trade to develop cross-platform, including using languages and environments that “run everywhere.” HTML5 is today’s answer for web-enabled platforms. However, it’s not a panacea, especially if your app requires the ultimate performance or native UI look and feel. There are other cross-platform frameworks that address the presentation layer of your application. But for those apps that have a preponderance of native code (e.g., highly-tuned C/C++ loops), there aren’t tons of solutions today to help with code reuse across these platforms using consistent tools and libraries. As we move forward with interim solutions, they’ll improve and become more robust, based, in no small part, on our input. What’s your answer to the cross-platform challenge? Are you fully invested in HTML5 now? What are your barriers? What’s your vision to navigate the cross-platform landscape? Here is the link where you can head next and learn more about how to answer the questions I have asked: https://software.intel.com/en-us Republished with permission from here. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Intel

    Read the article

  • Is throwing an error in unpredictable subclass-specific circumstances a violation of LSP?

    - by Motti Strom
    Say I wanted to create a Java List<String> (see spec) implementation that uses a complex subsystem, such as a database or file system, for its store so that it becomes a simple persistent collection rather than a basic in-memory one. (We're limiting it specifically to a List of Strings for the purposes of discussion, but it could be extended to automatically de-/serialise any object, with some help. We can also provide persistent Sets, Maps and so on in this way too.) So here's a skeleton implementation:

        class DbBackedList implements List<String> {
            private DbBackedList() {}

            /** Returns a list, possibly non-empty */
            public static List<String> getList() {
                return new DbBackedList();
            }

            public String get(int index) {
                return Db.getTable().getRow(index).asString(); // may throw DbExceptions!
            }

            // add(String), add(int, String), etc.
            ...
        }

    My problem lies with the fact that the underlying DB API may encounter connection errors that are not specified in the List interface that it should throw. My question is whether this violates Liskov's Substitution Principle (LSP). Bob Martin actually gives an example of a PersistentSet in his paper on LSP that violates LSP. The difference is that his newly-specified Exception there is determined by the inserted value and so is strengthening the precondition. In my case the connection/read error is unpredictable and due to external factors, and so is not technically a new precondition, merely an error of circumstance, perhaps like OutOfMemoryError, which can occur even when unspecified. In normal circumstances, the new Error/Exception might never be thrown. (The caller could catch it if it is aware of the possibility, just as a memory-restricted Java program might specifically catch OOME.) Is this therefore a valid argument for throwing an extra error, and can I still claim to be a valid java.util.List (or pick your SDK/language/collection in general) and not be in violation of LSP? If this does indeed violate LSP and is thus not practically usable, I have provided two less-palatable alternative solutions as answers that you can comment on, see below.

    Footnote: Use Cases. In the simplest case, the goal is to provide a familiar interface for cases when (say) a database is just being used as a persistent list, and to allow regular List operations such as search, subList and iteration. Another, more adventurous, use case is as a slot-in replacement for libraries that work with basic Lists, e.g. if we have a third-party task queue that usually works with a plain List:

        new TaskWorkQueue(new ArrayList<String>()).start()

    which is susceptible to losing all its queue in the event of a crash; if we just replace this with:

        new TaskWorkQueue(new DbBackedList()).start()

    we get instant persistence and the ability to share the tasks amongst more than one machine. In either case, we could either handle connection/read exceptions that are thrown, perhaps retrying the connection/read first, or allow them to throw and crash the program (e.g. if we can't change the TaskWorkQueue code).
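
    For comparison only - this is not from the question, and it switches from Java to C#, where all exceptions are unchecked anyway - here is a minimal analog sketch of a store-backed, read-only list whose indexer translates backend failures into a dedicated runtime exception, so callers that know about the persistence layer can catch it while ordinary collection consumers remain oblivious. The IBackingStore interface and StoreUnavailableException are hypothetical names invented for the sketch.

        // A C# analog sketch of the design question, not the poster's code.
        using System;
        using System.Collections;
        using System.Collections.Generic;

        public interface IBackingStore
        {
            int Count { get; }
            string Read(int index);   // may fail if the store is unreachable
        }

        public class StoreUnavailableException : Exception
        {
            public StoreUnavailableException(string message, Exception inner)
                : base(message, inner) { }
        }

        public class StoreBackedList : IReadOnlyList<string>
        {
            private readonly IBackingStore _store;

            public StoreBackedList(IBackingStore store) { _store = store; }

            public int Count => _store.Count;

            public string this[int index]
            {
                get
                {
                    try
                    {
                        return _store.Read(index);
                    }
                    catch (Exception ex) // e.g. a connection error from the store
                    {
                        // The collection contract says nothing about this failure mode,
                        // so surface it as a documented runtime exception rather than
                        // swallowing it - the same dilemma the question describes.
                        throw new StoreUnavailableException("Backing store read failed.", ex);
                    }
                }
            }

            public IEnumerator<string> GetEnumerator()
            {
                for (int i = 0; i < Count; i++)
                    yield return this[i];
            }

            IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
        }

    The sketch does not settle the LSP argument; it only shows the "wrap and document" alternative in a language where the compiler never forces callers to acknowledge the new exception.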

    Read the article

  • Is there a better term than "smoothness" or "granularity" to describe this language feature?

    - by Chris
    One of the best things about programming is the abundance of different languages. There are general-purpose languages like C++ and Java, as well as little languages like XSLT and AWK. When comparing languages, people often use things like speed, power, expressiveness, and portability as the important distinguishing features. There is one characteristic of languages I consider to be important that, so far, I haven't heard [or been able to come up with] a good term for: how well a language scales from writing tiny programs to writing huge programs. Some languages make it easy and painless to write programs that only require a few lines of code, e.g. task automation. But those languages often don't have enough power to solve large problems, e.g. GUI programming. Conversely, languages that are powerful enough for big problems often require far too much overhead for small problems. This characteristic is important because problems that look small at first frequently grow in scope in unexpected ways. If a programmer chooses a language appropriate only for small tasks, scope changes can require rewriting code from scratch in a new language. And if the programmer chooses a language with lots of overhead and friction to solve a problem that stays small, it will be harder for other people to use and understand than necessary. Rewriting code that works fine is the single most wasteful thing a programmer can do with their time, but using a bazooka to kill a mosquito instead of a flyswatter isn't good either. Here are some of the ways this characteristic presents itself.

    - Can be used interactively - there is some environment where programmers can enter commands one by one
    - Requires no more than one file - neither project files nor makefiles are required for running in batch mode
    - Can easily split code across multiple files - files can reference each other, or there is some support for modules
    - Has good support for data structures - supports structures like arrays, lists, and especially classes
    - Supports a wide variety of features - features like networking, serialization, XML, and database connectivity are supported by standard libraries

    Here's my take on how C#, Python, and shell scripting measure up. Python scores highest.

        Feature           C#        Python    shell scripting
        ---------------   -------   -------   ---------------
        Interactive       poor      strong    strong
        One file          poor      strong    strong
        Multiple files    strong    strong    moderate
        Data structures   strong    strong    poor
        Features          strong    strong    strong

    Is there a term that captures this idea? If not, what term should I use? Here are some candidates.

    - Scalability - already used to describe language performance, so it's not a good idea to overload it in the context of language syntax
    - Granularity - expresses the idea of being good just for big tasks versus being good for big and small tasks, but doesn't express anything about data structures
    - Smoothness - expresses the idea of low friction, but doesn't express anything about strength of data structures or features

    Note: Some of these properties are more correctly described as belonging to a compiler or IDE than to the language itself. Please consider these tools collectively as the language environment. My question is about how easy or difficult languages are to use, which depends on the environment as well as the language.

    Read the article

  • JavaScript: Code Folding

    - by Petr
    Today I would like to mention code folding in the new JavaScript editor support, which is available in the continual builds from our server. It's a basic feature, but it was mentioned in a comment under the post referenced above, so here it is. You can fold comments and every code block between { and }; the current support allows only methods to be folded. The difference is shown below: in the picture, the left side shows the current folding and the right side the new one.

    Code folding can be switched off in the Editor Options (Tools main menu -> Options -> Editor category -> General tab). In this dialog you can also define which folds should be collapsed by default when you open a file. These options more closely fit Java editor needs, but you can see in the next picture how the options are mapped for JavaScript code. The Method option folds all functions in the code. Other code blocks are folded through the option Tags and Other Code Blocks. Documentation comments (starting with /**) are folded through Javadoc Comments, and when you check Initial Comment, all comments that start with /* are folded by default.

    The new JavaScript editor also supports custom folds. To add your custom fold, type in two special comments as shown in this example:

        // <editor-fold>
        Your code goes here...
        // </editor-fold>

    You can define the default description of a collapsed fold by adding a "desc" attribute:

        // <editor-fold desc="This is my super secret genius code.">
        Your code goes here...
        // </editor-fold>

    You can set a fold to be collapsed by default by adding a "defaultstate" attribute:

        // <editor-fold defaultstate="collapsed">
        Your code goes here...
        // </editor-fold>

    There is a code template that helps with writing custom fold comments; the abbreviation for the template is fcom. As I wrote, the new JS support is available in the continual builds. Go here for more info.

    Read the article

  • The Sensemaking Spectrum for Business Analytics: Translating from Data to Business Through Analysis

    - by Joe Lamantia
    One of the most compelling outcomes of our strategic research efforts over the past several years is a growing vocabulary that articulates our cumulative understanding of the deep structure of the domains of discovery and business analytics. Modes are one example of the deep structure we've found. After looking at discovery activities across a very wide range of industries, question types, business needs, and problem-solving approaches, we've identified distinct and recurring kinds of sensemaking activity, independent of context. We label these activities Modes: Explore, compare, and comprehend are three of the nine recognizable modes. Modes describe *how* people go about realizing insights. (Read more about the programmatic research and formal academic grounding and discussion of the modes here: https://www.researchgate.net/publication/235971352_A_Taxonomy_of_Enterprise_Search_and_Discovery) By analogy to languages, modes are the 'verbs' of discovery activity. When applied to the practical questions of product strategy and development, the modes of discovery allow one to identify what kinds of analytical activity a product, platform, or solution needs to support across a spread of usage scenarios, and then make concrete and well-informed decisions about every aspect of the solution, from high-level capabilities to which specific types of information visualizations better enable these scenarios for the types of data users will analyze.

    The modes are a powerful generative tool for product making, but if you've spent time with young children, or had a really bad hangover (or both at the same time...), you understand the difficulty of communicating using only verbs. So I'm happy to share that we've found traction on another facet of the deep structure of discovery and business analytics. Continuing the language analogy, we've identified some of the 'nouns' in the language of discovery: specifically, the consistently recurring aspects of a business that people are looking for insight into. We call these discovery Subjects, since they identify *what* people focus on during discovery efforts, rather than *how* they go about discovery as with the Modes.

    Defining the collection of Subjects people repeatedly focus on allows us to understand and articulate sensemaking needs and activity in a more specific, consistent, and complete fashion. In combination with the Modes, we can use Subjects to concretely identify and define scenarios that describe people's analytical needs and goals. For example, a scenario such as 'Explore [a Mode] the attrition rates [a Measure, one type of Subject] of our largest customers [Entities, another type of Subject]' clearly captures the nature of the activity — exploration of trends vs. deep analysis of underlying factors — and the central focus — attrition rates for customers above a certain set of size criteria — from which follow many of the specifics needed to address this scenario in terms of data, analytical tools, and methods. We can also use Subjects to translate effectively between the different perspectives that shape discovery efforts, reducing ambiguity and increasing impact on both sides of the perspective divide. For example, from the language of business, which often motivates analytical work by asking questions in business terms, to the perspective of analysis.
    The question posed to a Data Scientist or analyst may be something like "Why are sales of our new kinds of potato chips to our largest customers fluctuating unexpectedly this year?" or "Where can we innovate, by expanding our product portfolio to meet unmet needs?". Analysts translate questions and beliefs like these into one or more empirical discovery efforts that more formally and granularly indicate the plan, methods, tools, and desired outcomes of analysis. From the perspective of analysis this second question might become, "Which customer needs of type 'A', identified and measured in terms of 'B', that are not directly or indirectly addressed by any of our current products, offer 'X' potential for 'Y' positive return on the investment 'Z' required to launch a new offering, in time frame 'W'? And how do these compare to each other?". Translation also happens from the perspective of analysis to the perspective of data, in terms of availability, quality, completeness, format, volume, etc.

    By implication, we are proposing that most working organizations — small and large, for profit and non-profit, domestic and international, and in the majority of industries — can be described for analytical purposes using this collection of Subjects. This is a bold claim, but simplified articulation of complexity is one of the primary goals of sensemaking frameworks such as this one. (And, yes, this is in fact a framework for making sense of sensemaking as a category of activity - but we're not considering the recursive aspects of this exercise at the moment.)

    Compellingly, we can place the collection of subjects on a single continuum — we call it the Sensemaking Spectrum — that simply and coherently illustrates some of the most important relationships between the different types of Subjects, and also illuminates several of the fundamental dynamics shaping business analytics as a domain. As a corollary, the Sensemaking Spectrum also suggests innovation opportunities for products and services related to business analytics. The first illustration below shows Subjects arrayed along the Sensemaking Spectrum; the second illustration presents examples of each kind of Subject. Subjects appear in colors ranging from blue to reddish-orange, reflecting their place along the Spectrum, which indicates whether a Subject addresses more the viewpoint of systems and data (data-centric and blue) or people (user-centric and orange). This axis is shown explicitly above the Spectrum. Annotations suggest how Subjects align with the three significant perspectives of Data, Analysis, and Business that shape business analytics activity. This rendering makes explicit the translation and bridging function of Analysts as a role, and analysis as an activity.

    Subjects are best understood as fuzzy categories [http://georgelakoff.files.wordpress.com/2011/01/hedges-a-study-in-meaning-criteria-and-the-logic-of-fuzzy-concepts-journal-of-philosophical-logic-2-lakoff-19731.pdf], rather than tightly defined buckets. For each Subject, we suggest some of the most common examples: Entities may be physical things such as named products, or locations (a building, or a city); they could be Concepts, such as satisfaction; or they could be Relationships between entities, such as the variety of possible connections that define linkage in social networks.
    Likewise, Events may indicate a time and place in the dictionary sense; or they may be Transactions involving named entities; or they may take the form of Signals, such as 'some Measure had some value at some time' - what many enterprises understand as alerts. The central story of the Spectrum is that though consumers of analytical insights (represented here by the Business perspective) need to work in terms of Subjects that are directly meaningful to their perspective — such as Themes, Plans, and Goals — the working realities of data (condition, structure, availability, completeness, cost) and the changing nature of most discovery efforts make direct engagement with source data in this fashion impossible. Accordingly, business analytics as a domain is structured around the fundamental assumption that sensemaking depends on analytical transformation of data. Analytical activity incrementally synthesizes more complex and larger-scope Subjects from data in its starting condition, accumulating insight (and value) by moving through a progression of stages in which increasingly meaningful Subjects are iteratively synthesized from the data and recombined with other Subjects. The end goal of 'laddering' successive transformations is to enable sensemaking from the business perspective, rather than the analytical perspective. Synthesis through laddering is typically accomplished by specialized Analysts using dedicated tools and methods.

    Beginning with some motivating question, such as seeking opportunities to increase the efficiency (a Theme) of fulfillment processes to reach some level of profitability by the end of the year (a Plan), Analysts will iteratively wrangle and transform source data Records, Values and Attributes into recognizable Entities, such as Products, that can be combined with Measures or other data into the Events (shipment of orders) that indicate the workings of the business. More complex Subjects (to the right of the Spectrum) are composed of or make reference to less complex Subjects: a business Process such as Fulfillment will include Activities such as confirming, packing, and then shipping orders. These Activities occur within or are conducted by organizational units such as teams of staff or partner firms (Networks), composed of Entities which are structured via Relationships, such as supplier and buyer. The fulfillment process will involve other types of Entities, such as the products or services the business provides. The success of the fulfillment process overall may be judged according to a sophisticated operating efficiency Model, which includes tiered Measures of business activity and health for the transactions and activities included. All of this may be interpreted through an understanding of the operational domain of the business's supply chain (a Domain). We'll discuss the Spectrum in more depth in succeeding posts.

    Read the article

  • links for 2011-01-04

    - by Bob Rhubart
    Webcasts (tags: ping.fm) Five Key Trends in Enterprise 2.0 for 2011 (Oracle Enterprise 2.0 Blog) Kellsey Ruppel shares insight from Oracle's Andy MacMillan. (tags: oracle otn enterprise2.0) Victor Bax: Lost in Service Oriented Architecture? "SOA is a concept, no more, no less. SOA is not a technology, or a piece of software. It is an architecture, a model." - Victor Bax (tags: oracle soa) Jan-Leendert: Oracle 11g SOA Suite read multi record data from csv file with the file adapter (master-detail) "The file adapter is a very powerlful tool to read files with structured data. Most of the time you will read simple csv files with one record per row. But what if your csv file contains multiple records with different types?" - Jan-Leendert (tags: oracle soa soasuite) @myfear: Five ways to know how your data looked in the past. Entity Auditing. "Whatever requirements you have. I can promise you, that it will never be a simple solution. In general it's best to evaluate your purpose for auditing in detail." - Oracle ACE Director Markus Eisele (tags: oracle otn oracleace java) @fteter: Buffing Up The Crystal Ball "While I'm already tired of seeing these types of posts (I'm writing on New Year's Day), I'm also feeling guilty about not making my own set of predictions." - Oracle ACE Director Floyd Teter (tags: oracle otn oracleace ec2 cloud fusionmiddleware) @bex: ECM New Year's Resolutions "Happy new year! Most people use the first post of the year to go over their own blog statistics of popular posts... but since my blog's fiscal year ends in April, I decided to do new years resolutions instead." - Oracle ACE Director Bex Huff (tags: oracle otn oracleace ecm enterprise2.0) Izaak de Hullu: Embedded Java in a 11g BPEL process "In an earlier blog my colleague Peter Ebell explained how you can create an extension of com.collaxa.cube.engine.ext.BPELXExecLet to do your coding in a regular Java environment so you have code completion and validation..." - Izaak de Hullu (tags: oracle otn bpel java soa) @gschmutz: Cannot access EM console after installing SOA Suite 11g PS2 Oracle ACE Director Guido Schmutz encounters a problem and shares the solution. (tags: oracle otn oracleace soa soasuite)

    Read the article

  • Business Strategy - Google Case Study

    Business strategy, as defined by SMBTN.com, is a term used in business planning that implies a careful selection and application of resources to obtain a competitive advantage in anticipation of future events or trends. In more general terms, business strategy is positioning a company so that it has the greatest competitive advantage over others in the markets and industries in which it participates. This process involves making corporate decisions regarding which markets to provide goods and services to, pricing, acceptable quality levels, and how to interact with others in the marketplace. The primary objective of business strategy is to create and increase value for all of its shareholders and stakeholders through the creation of customer value.

    According to InformationWeek.com, Google has a distinctive technology advantage over its competitors, such as Microsoft, eBay, Amazon, and Yahoo. Google utilizes custom high-performance systems which are cost efficient because they can scale to extreme workloads. This hardware allows for a huge cost advantage over its competitors. In addition, InformationWeek.com interviewed Stephen Arnold, who stated that Google's programmers are 50%-100% more productive compared to programmers working for their competitors. He based this theory on Google's competitors having to spend up to four times as much just to keep up.

    In addition to Google's technological advantage, they have also developed a decentralized management schema where employees report directly to multiple managers and team project leaders. This allows the responsibility for the technology department to be shared amongst multiple senior-level engineers and removes the need for a singular department head to oversee the activities of the department. This is a unique approach compared to the standard management style. Typically a department head like a CIO or CTO would oversee the department's global initiatives and business functionality. This would then be passed down and administered through middle management and implemented by programmers, business analysts, network administrators and database administrators.

    It goes without saying that an IT professional's responsibilities would be directed by Google's technological advantage and management strategy, simply because they work within the department and would have to design, develop, and support the high-performance systems, and would have to report to multiple managers and project leaders on a regular basis. Since Google was established and driven by new and emerging technology, all other departments would be directly impacted by the technology department. In fact, they would have to cater to the technology department since it is a huge driving force in the success of Google.

    Reference: http://www.smbtn.com/smallbusinessdictionary/#b http://www.informationweek.com/news/software/linux/showArticle.jhtml?articleID=192300292&pgno=1&queryText=&isPrev=

    Read the article

  • TechEd 2010 Day Three: The Database Designer (Isn't)

    - by BuckWoody
    Yesterday at TechEd 2010 here in New Orleans I worked the front booth, answering general SQL Server questions for the masses. I was actually a little surprised to find most of the questions I got were from folks that wanted to know more about StreamInsight and Master Data Services. In past conferences I've been asked a lot of "free consulting" questions about problems folks have had with older products. I don't mind that a bit - in fact, I'm always happy to help in any way I can. But this time people are really interested in the new features in the product, and I like that they are thinking ahead, not just having to solve problems in production.

    My presentation was on "Database Design in an Hour". We had the usual fun, and Sideshow Bob made an appearance - I kid you not. The guy in the back of the room looked just like Sideshow Bob, so I quickly held a "best hair" contest, and he won. During the presentation, I explain the tools you can use to design databases. I also explain that the "Database Designer" tool in SQL Server Management Studio (SSMS) isn't truly a designer - it uses non-standard notation, doesn't have a meta-data dictionary, and worst of all, it works at the physical level. In other words, whatever you do in SSMS will automatically change the field/table/relationship structures in the database. We fixed this in SSMS 2008 and higher by adding an option to block that, but the tool is not a good design tool nonetheless. To be fair, no one I know of at Microsoft recommends that it is - but I was shocked to hear so many developers in the room defending it as a good tool. I think the main issue for someone who doesn't have to work with relational systems a great deal is that it can be difficult to figure out foreign keys. The syntax makes them look "backwards", so it's just easier to grab a field and place it on the table you want to point to.

    There are options. You can download a couple of free tools (CA has a community edition of ERwin, Quest has one, and Embarcadero also has one), and if you design more than one or two databases a year, it may be worth buying a true design tool. For years I used Visio, but we changed it so that it doesn't forward-engineer (create the DDL) any more, so it isn't a true design tool either. So investigate those free and not-so-free tools. You'll find they help you in your job - but stay away from the Database Designer in SSMS. Or I'll send Sideshow Bob over there to straighten you out.

    Read the article

  • Certification Notes: 70-583 Designing and Developing Windows Azure Applications

    - by BuckWoody
    It’s time for another certification, and we’ve just released the 70-583 exam on Windows Azure. I’ve blogged my “study plans” here before for other certifications, so I thought I would do the same for this one. I’ll also need to take exams 70-513 and 70-516, but I’ll post my notes on those separately. None of these are “brain dumps” or any questions from the actual tests - just the books, links and notes I have from my studies. I’ll update these references as I’m studying, so bookmark this site and watch my Twitter and Facebook posts for when I’ll update them, or just subscribe to the RSS feed. A “Green” color on the check-block means I’ve done that part so far, red means I haven’t.

    First, I need to refresh my memory on some basic coding, so along with the Azure-specific information I’m reading the following general programming books:
    - Introducing Microsoft .NET (Pro-Developer): http://www.amazon.com/Introducing-Microsoft-Pro-Developer-David-Platt/dp/0735619182/ref=sr_1_1?s=books&ie=UTF8&qid=1296339237&sr=1-1
    - Head First C#, 2E: A Learner's Guide to Real-World Programming with Visual C# and .NET: http://www.amazon.com/Head-First-2E-Real-World-Programming/dp/1449380344/ref=sr_1_1?ie=UTF8&qid=1296339176&sr=8-1
    - Microsoft Visual C# 2008 Step by Step: http://www.amazon.com/Microsoft-Visual-2008-Step/dp/0735624305/ref=sr_1_1?s=books&ie=UTF8&qid=1296339208&sr=1-1

    [ ] The first place to start is at the official site for the certification. That’s here: http://www.microsoft.com/learning/en/us/Exam.aspx?ID=70-583&Locale=en-us
    [ ] On that page you’ll find several resources, and the first you should follow is “Save to my learning” so you have a place to track everything. Then click the “Related Learning Plans” link and follow the videos and read the documentation in each of those bullets. There are six areas on the learning plan that you should focus on - make sure you open the learning plan to drill into the specifics.
    [ ] Designing Data Storage Architecture (18%) Books I’m Reading: Links: My Notes:
    [ ] Optimizing Data Access and Messaging (17%) Books I’m Reading: Links: My Notes:
    [ ] Designing the Application Architecture (19%) Books I’m Reading: Links: My Notes:
    [ ] Preparing for Application and Service Deployment (15%) Books I’m Reading: Links: My Notes:
    [ ] Investigating and Analyzing Applications (16%) Books I’m Reading: Links: My Notes:
    [ ] Designing Integrated Solutions (15%) Books I’m Reading: Links: My Notes:

    Read the article

  • New Source Database Added for EBS 12 + 11gR2 Transportable Tablespaces

    - by John Abraham
    The Transportable Tablespaces (TTS) process was originally certified for the migration of E-Business Suite R12 databases going from a source database of 11gR1 or 11gR2 to a target of 11gR2. This certification has now been expanded to include a source database of 10gR2 (10.2.0.5) - this will potentially save time for existing 10gR2 customers, as it removes a crucial upgrade step prior to performing the platform migration. The migration process requires an updated Controlled patch delivered by the Oracle E-Business Suite Platform Engineering team, i.e. it requires a password obtainable from Oracle Support. We released the patch in this manner to gauge uptake, and to help identify and monitor any customer issues due to the nature of this technology. This patch has been updated to now include support for 10gR2 as a source database.

    Does it meet your requirements? Note that for migration across platforms of the same "endian" format, users are advised to use the Transportable Database (TDB) migration process instead for large databases. The "endian-ness" of target platforms can be verified by querying the view V$DB_TRANSPORTABLE_PLATFORM using SQL*Plus (connected as sysdba) on the source platform:

        SQL> select platform_name from v$db_transportable_platform;

    If the intended target platform does not appear in the output, it means that it is of a different endian format from the source. Consequently, database migration will need to be performed via Transportable Tablespaces (for large databases) or export/import.

    The use of Transportable Tablespaces can greatly speed up the migration of the data portion of the database. However, it does not affect metadata, which must still be migrated using export/import. We recommend that users initially perform a test migration on their database, using export/import with the 'metrics=y' parameter. This will help identify the relative amounts of data and metadata, and provide a basis for assessing likely gains in timing. In general, the larger the amount of data (compared to metadata), the greater the reduction in downtime that can be expected from using TTS as a migration process. For smaller databases, or for those that have relatively little data compared to metadata, TTS will not be as beneficial for cross-endian migration, and the use of export/import (datapump) for the whole database is recommended.

    Where can I find more information?
    - Using Transportable Tablespaces to Migrate Oracle E-Business Suite Release 12 Using Oracle Database 11g Release 2 Enterprise Edition (My Oracle Support Document 1311487.1)
    - Oracle Database Administrator's Guide 11g Release 2 (11.2)

    Related Articles
    - Database Migration using 11gR2 Transportable Tablespaces Now Certified for EBS 12
    - New Source Databases Added for Transportable Tablespaces + EBS 11i
    - 10gR2 Transportable Tablespaces Certified for EBS 11i
    - Migrating E-Business Suite Release 11i Databases Between Platforms
    - Migrating E-Business Suite Release 12 Databases Between Platforms

    Read the article

  • Using ext4 in VMware machine

    First of all, using a journaling filesystem like NTFS, ext4, XFS, or JFS (not to name them all) is a very good idea, and nowadays it is unthinkable not to. Linux offers a good variety of options for the journaling filesystem on your system. For years I have been using SGI's XFS, and I am pretty confident in the stability, performance and reliability of the system. In earlier years I had to struggle with incompatibilities between XFS and the boot loader; using an ext2-formatted /boot solved this issue. But, wow, that is ages ago!

    Lately, I had to set up a fresh Lucid Lynx (Ubuntu 10.04 LTS) system for a change of our internal groupware / messaging system. Therefore, I fired up a new virtual machine with an almost standard configuration in VMware Server and ran through our network-based PXE boot and installation procedure. At a certain step in this process, Ubuntu asks you about the partitioning of your hard drive(s). Honestly, I have to say that it was only out of curiosity that I stuck to the "default" suggestion and put my faith and trust in the Ubuntu installation routine... resulting in an ext4-based root mount point ( / ). The rest of the installation went on without further concerns or worries. Note: I really can't remember why I chose to go away from my favourite... Well, it turned out to be the wrong decision after all.

    Ok, let's continue the story about ext4 in a VMware-based virtual machine. After some hours installing additional packages and configuring the new system using LDAP for general authentication and login, I had an "out-of-the-box" usable enterprise messaging system based on Zarafa 6.40 Community Edition, including a proper SSL-based Webaccess interface and the Z-Push extension for ActiveSync with my Nokia mobile. Straightforward and pretty nice for the time spent on the setup. Having priority on other tasks, I just let the system run and didn't pay it any further attention at all. Until I ran into an upgrade of "Mail for Exchange" on Symbian OS. My mobile did not bother me at all with the upgrade and everything went smoothly, but trying to re-establish the ActiveSync connection to the Zarafa messaging system resulted in a frustrating situation. So, I shifted my focus back to the Linux system, and I was amazed to figure out that root had been remounted read-only due to hard drive failures, or at least errors reported by ext4. Firing up Google only confirmed my concerns, and it seems to me that ext4 in VMware-based virtual machines is not a stable and reliable candidate. You might consider reading these external resources:

    - ext4 fs corruption under VMWare Server 2.01
    - Bug #389555 - ext4 filesystem corruption

    Well, I learned my lesson, and ext{2|3|4}-based filesystems are not going to be used on any of my Linux systems or customer installations in the future. Addendum: I did not try this setup in other virtualization environments like VirtualBox, qemu, kvm, Xen, etc.

    Read the article

  • BI&EPM in Focus April 2012

    - by Mike.Hallett(at)Oracle-BI&EPM
    General News Oracle OpenWorld call for papers now open, now through April 9 (link) Oracle Announces Availability of Oracle Exalytics In-Memory Machine (link) Oracle EPM and BI Support Newsletter Current Edition - Volume 3 : March 2012 (link) Customers Asiana Airlines Improves Passenger Management with Near-Real-Time Reservation and Ticketing Information  Centraal Boekhuis Delivers Faster with Oracle BI 11g Essatto Software Speeds Data Aggregation Tenfold; Integrates BI, Performance Management, and Data Warehousing for Midsize Businesses Grupo WTorre Supports Management's Decision-Making with OBIEE, Ensuring Uniform, Reliable, and Consistent Data Indian Overseas Bank Cuts Planning Schedule by 45 Worker Days per Year, Assesses Market Risk Instantly with Business Intelligence System Kentucky Community and Technical College System Enables Data-Driven Decision-Making Using Integrated System with Management Dashboards National Australia Bank Achieves 200% ROI, Improves Data Quality and Reporting Integrity with Oracle Hyperion DRM R.L. Polk & Co. Enhances Business Intelligence Capabilities, Optimizes System Performance with Extreme Analytics Machine Test ResCare, Inc. Transforms Reporting to Improve Healthcare Service Performance with Oracle Business Analytics  Rochester City School District Uses OBIEE to Track Student Achievement, Identify Areas for Improvement, Accelerate Reporting  Société Générale Standardizes, Accelerates, and Improves Budget Planning Accuracy across Global Enterprise The State Accounting Office of Georgia Integrates Financial Information, Shortens Financial Closings and Streamlines Reporting across 175 Organizations   Events 4-day Oracle Real-Time Decisions Hands-on Technical Workshop for Partners (PTS, Free) May 14-17, 2012: Colombes, Paris, France Nordic events : “Latest Release of Oracle Hyperion EPM and BI Suites Helps Organizations Plan through Uncertainty, Improve Decision-Making and Meet Regulatory Requirements” (April 17, Sweden | April 18, Norway | April 19, Denmark | April 24, Finland) Webcast Replay from Balaji Yelamanchili and Paul Rodwick: “Analytics Without Limits - The Latest on Oracle Exalytics In-Memory Machine and Oracle Business Intelligence”  (link)  Wednesday, April 04, 2012: Business Analytics launch webcast: Invite your customers to register (link) Big Data Online Forum now available on Demand (link)  Enterprise Performance Management Webcast Replay: Accurate Forecasting within the Business Planning Cycle (link) Oracle Hyperion Profitability and Cost Management (HPCM) Master Support Note (link) Business  Intelligence Whitepaper: Driving Innovation Through Analytics (link) Gartner: CIOs Identify BI as the No. 1 Technology Priority for 2012 (link) Webcast Replay: Exalytics in Action: Airlines, US Census and Federal Spending Demo Applications  (link) NEWLY RELEASED Walk-in Video for Exalytics - Use This to Start Customer/Partner Meetings! (link) IDC Insight Paper: “Oracle's All-Out Assault on the Big Data Market: Offering Hadoop, R, Cubes, and Scalable IMDB in Familiar Packages” (link) System Requirements and Supported Platforms for Oracle Business Intelligence Suite Enterprise Edition 11gR1 Certification Matrix now published to include OBIEE 11.1.1.6.0 (link) Maintenance Release Guide (List of Bugs Fixed) for Oracle Business Intelligence Enterprise Edition (OBIEE) 11.1.1.6.0  (link) OBIEE 11.1.1.6: Is OBIEE 11.1.1.6 Certified With OBI Apps 7.9.6.3?  
(link) Information Center: Troubleshooting Oracle Business Intelligence Applications (support login req'd)  (link)      

    Read the article

  • ArchBeat Link-o-Rama for November 8, 2012

    - by Bob Rhubart
    Webcast: Meeting Customer Expectations in the New Age of Retail Keep your eye on this live webcast as Sanjeev Sharma (Principal Product Director, Oracle Exalogic), Kelly Goetsch (Senior Principal Product Manager, Oracle Commerce), and Dan Conway (Senior Product Manager, Oracle Retail) offer real-world examples of business value derived by running customer-facing applications on Oracle Engineered Systems. Live, Thursday Nov 8, 10am PT/ 1pm ET. Solving Big Problems in Our 21st Century Information Society | Irving Wladawsky-Berger "I believe that the kind of extensive collaboration between the private sector, academia and government represented by the Internet revolution will be the way we will generally tackle big problems in the 21st century. Just as with the Internet, governments have a major role to play as the catalyst for many of the big projects that the private sector will then take forward and exploit. The need for high bandwidth, robust national broadband infrastructures is but one such example." — Irving Wladawsky-Berger SOA Still Not Dead: Ratification of Governance Standard Highlights SOA’s Continued Relevance So just about the time I dig into Google Trends to learn that the conversation about governance peaked in 2004, along comes all this InfoQ article by Richard Seroter. And of course you've already listened to the OTN Archbeat Podcast about governance, right? Right? Implications of Java 6 End of Public Updates for Oracle E-Business Suite Users | Steven Chan The short version is: "Nothing will change for EBS users after February 2013." According to Steven Chan, "EBS users will continue to receive critical bug fixes and security fixes as well as general maintenance for Java SE 6." You'll find additional information on Steven's blog. ADF Mobile Custom Javascript – iFrame Injection | John Brunswick The ADF Mobile Framework provides a range of out of the box components to add within your AMX pages, according to John Brunswick. But what happens when "an out of the box component does not directly fulfill your development need? What options are available to extend your application interface?" John has an answer. How Data and BPM are married to get the right information to the right people at the right time | Leon Smiers "Business Process Management…supports a large group of stakeholders within an organization, all with different needs," says Oracle ACE Leon Smiers. "End-to-end processes typically run across departments, stakeholders and applications, and can often have a long life-span. So how do organizations provide all stakeholders with the information they need?" Leon provides answers in this post. Thought for the Day "(When) asking skilled architects…what they do when confronted with highly complex problems…(they) would most likely answer, 'Just use Common Sense.' (A) better expression than 'common sense' is 'contextual sense' — a knowledge of what is reasonable within a given content. Practicing architects through eduction, experience and examples accumulate a considerable body of contextual sense by the time they're entrusted with solving a system-level problem…" — Eberhardt Rechtin (January 16, 1926 – April 14, 2006) Source: SoftwareQuotes.com

    Read the article

  • Ubuntu 11.10 doesn't detect external usb hard drive

    - by Andrew
    I have been battling with this issue for a bit and cannot find the answer to it. dmesg sees the device, a Symwave WDC WD64...:

        media@Media-pc:~$ dmesg | tail -n 20
        [78678.719497] scsi 10:0:0:0: Direct-Access Generic- USB3.0 CRW -0 1.00 PQ: 0 ANSI: 0 CCS
        [78678.725621] scsi 10:0:0:1: Direct-Access Generic- USB3.0 CRW -1 1.00 PQ: 0 ANSI: 0 CCS
        [78684.073837] scsi 11:0:0:0: Direct-Access SYMWAVE WDC WD6400AAKS-0 3B01 PQ: 0 ANSI: 4
        [78691.008126] scsi 11:0:0:0: uas_eh_abort_handler tag 0
        [78691.008139] scsi 11:0:0:0: uas_eh_device_reset_handler tag 0
        [78691.008147] scsi 11:0:0:0: uas_eh_target_reset_handler tag 0
        [78691.008154] scsi 11:0:0:0: uas_eh_bus_reset_handler tag 0
        [78691.080307] usb 2-2.4: reset high speed USB device number 9 using ehci_hcd
        [78691.221427] scsi 11:0:0:0: Device offlined - not ready after error recovery
        [78691.221498] scsi 11:0:0:0: rejecting I/O to offline device
        [78691.221519] scsi 11:0:0:0: rejecting I/O to offline device
        [78691.222952] scsi 11:0:0:1: Enclosure SYMWAVE SES 3B01 PQ: 0 ANSI: 4
        [78691.223156] scsi 11:0:0:2: uas_sense_old: urb length 26 disagrees with IU sense data length 510, using 18 bytes of sense data
        [78691.225061] sd 11:0:0:0: Attached scsi generic sg3 type 0
        [78691.225344] ses 11:0:0:1: Attached Enclosure device
        [78691.225495] ses 11:0:0:1: Attached scsi generic sg4 type 13
        [78691.226266] sd 10:0:0:0: Attached scsi generic sg5 type 0
        [78691.226653] sd 10:0:0:1: Attached scsi generic sg6 type 0
        [78691.241647] sd 10:0:0:0: [sdd] Attached SCSI removable disk
        [78691.243832] sd 10:0:0:1: [sde] Attached SCSI removable disk

    It looks like it attaches sdd and sde. Now when I look in the Disk Utility it shows "Hard disk Symwave WD6400AAKS-0" as device /dev/sdc, but it doesn't show any other info than that; if I format, it says that it cannot open /dev/sdc (no device or address error). Underneath the device it does show two Generic USB3.0 CRW entries, which are sdd and sde. Now if I do an fdisk -l it doesn't show the device:

        media@Media-pc:~$ sudo fdisk -l

        Disk /dev/sda: 80.0 GB, 80026361856 bytes
        255 heads, 63 sectors/track, 9729 cylinders, total 156301488 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000247de

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048   152176639    76087296   83  Linux
        /dev/sda2       152178686   156301311     2061313    5  Extended
        /dev/sda5       152178688   156301311     2061312   82  Linux swap / Solaris

        Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x948fc822

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1              63  1953520064   976760001    7  HPFS/NTFS/exFAT

    So now I am confused. Any ideas how I get fdisk to see the device?

    Read the article

  • WebLogic Server Weekly for March 26th, 2012: WLS 1211 Update, Java 7 Certification, Galleria, WebLogic for DBAs, REST and Enterprise Architecture, Singleton Services

    - by Steve Button
    WebLogic Server 12c Certified with Java 7 for Production Use
    WebLogic Server 12c (12.1.1) has been certified with JDK 7 for development usage since December, and we have now completed JDK 7 certification for use with production systems. In doing so, we have updated the WebLogic Server 12c (12.1.1) distributions, incorporating fixes associated with JDK 7 support as well as some bundled patches that address several issues that have been discovered since the initial release. These updated distributions are available for download from OTN and will be beneficial for all WebLogic Server 12c (12.1.1) users in general. What's New | Release Notes | Download Here! (Updated Oracle WebLogic Server 12.1.1.0 distribution)

    Never one to miss a trick, Markus Eisele was one of the first to notice the WebLogic Server 12c update and post a blog about it: "Sources told me that as of Friday last week you have an updated version of WebLogic Server 12c on OTN." http://blog.eisele.net/2012/03/updated-oracle-weblogic-server-12110.html

    Using WebLogic Server 12c with Java 7 - Video
    To illustrate the use of Java 7 with WebLogic Server 12c, I put together a screencam showing the creation of a domain using Java 7, then building and deploying a simple web application that uses Java 7 syntax to show it working.

    Ireland OUG Presentation: WebLogic for DBAs
    Simon Haslam posted his slides from a presentation he gave in Dublin on 21/3/12 at the OUG Ireland conference. In this presentation, he explains the core concepts and ideas behind WebLogic Server, walks through an installation, and offers some tips and common gotchas to avoid. Simon also covers some aspects of installing and using Enterprise Manager 12c. Note: I usually install the JVM and use the generic .jar installer rather than using an installer bundled with a JVM. http://www.slideshare.net/Veriton/weblogic-for-dbas-10h

    Slightly Retro: Jeff West on Enterprise Architecture and REST
    In this week's flashback, we look at Jeff West's blog from early 2011, where he provides some thoughtful opinions on enterprise architecture and innovation, then jumps into his views on REST: "After I progressed in my career and did more team-leading and architecture type roles I was 'educated' on what it meant to have Asynchronous and Long-Running processes as part of your Enterprise Application architecture. If I had a synchronous process then I needed a thread available to service the request and then provide the response." https://blogs.oracle.com/jeffwest/entry/weblogic_integration_wli_web_services_and_soap_and_rest_part_1

    Starting Managed Servers without an Administration Server using Node Manager and WLST
    Blogger weblogic-tips shows how to start a managed server without going through the Administration Server, using the Node Manager and WLST: connect WLST to a Node Manager by entering the nmConnect command. http://www.weblogic-tips.com/2012/02/18/starting-managed-servers-without-an-administration-server-using-node-manager-and-wlst/

    Using WebLogic Server Singleton Services
    WebLogic Server has supported the notion of a Singleton Service for a number of releases, in which WebLogic Server will maintain a single instance of a configured singleton service on one managed server within a cluster. This blog demonstrates how the singleton service can be accessed and used from applications deployed on the cluster. http://buttso.blogspot.com.au/2012/03/weblogic-server-singleton-services.html

    Read the article

  • Domain Models (PHP)

    - by Calum Bulmer
    I have been programming in PHP for several years and have, in the past, adopted methods of my own to handle data within my applications. I have built my own MVC in the past and have a reasonable understanding of OOP within PHP, but I know my implementation needs some serious work.

    In the past I have used an is-a relationship between a model and a database table. I now know, after doing some research, that this is not really the best way forward. As far as I understand it, I should create models that don't really care about the underlying database (or whatever storage mechanism is to be used) but only care about their actions and their data.

    From this I have established that I can create models of, let's say for example, a Person, and this Person object could have some Children (human children) that are also Person objects held in an array (with addPerson and removePerson methods, accepting a Person object). I could then create a PersonMapper that I could use to get a Person with a specific 'id', or to save a Person. This could then look up the relationship data in a lookup table and create the associated child objects for the Person that has been requested (if there are any), and likewise save the data in the lookup table on the save command.

    This is now pushing the limits of my knowledge... What if I wanted to model a building with different levels and different rooms within those levels? What if I wanted to place some items in those rooms? Would I create a class for Building, Level, Room and Item with the following structure?

    - a Building can have 1 or many Level objects held in an array
    - a Level can have 1 or many Room objects held in an array
    - a Room can have 1 or many Item objects held in an array

    ...and mappers for each class, with higher-level mappers using the child mappers to populate the arrays (either on request of the top-level object, or lazy-loaded on request)? This seems to tightly couple the different objects, albeit in one direction (i.e. a floor does not need to be in a building, but a building can have levels). Is this the correct way to go about things?

    Within the view I am wanting to show a building with an option to select a level, and then show the level with an option to select a room, etc., but I may also want to show a tree-like structure of items in the building and what level and room they are in. I hope this makes sense. I am just struggling with the concept of nesting objects within each other when the general concept of OOP seems to be to separate things. If someone can help it would be really useful. Many thanks.
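    As an illustrative aside (not part of the original question), the composition-plus-mapper shape described above can be sketched in a few classes. The sketch is in C++ purely for illustration, even though the question concerns PHP, since only the structure matters; every class and method name here is invented for the example and not taken from any framework.

    // Minimal, hypothetical sketch of the aggregate + data-mapper shape:
    // the domain objects hold their children directly and know nothing
    // about storage; only the mapper talks to the database.
    #include <string>
    #include <utility>
    #include <vector>

    class Item {
    public:
        explicit Item(std::string name) : m_name(std::move(name)) {}
        const std::string& name() const { return m_name; }
    private:
        std::string m_name;
    };

    class Room {
    public:
        void addItem(Item item) { m_items.push_back(std::move(item)); }
        const std::vector<Item>& items() const { return m_items; }
    private:
        std::vector<Item> m_items;
    };

    class Level {
    public:
        void addRoom(Room room) { m_rooms.push_back(std::move(room)); }
        const std::vector<Room>& rooms() const { return m_rooms; }
    private:
        std::vector<Room> m_rooms;
    };

    class Building {
    public:
        void addLevel(Level level) { m_levels.push_back(std::move(level)); }
        const std::vector<Level>& levels() const { return m_levels; }
    private:
        std::vector<Level> m_levels;
    };

    // The mapper layer: BuildingMapper would delegate to a LevelMapper,
    // which delegates to a RoomMapper, either eagerly or lazily.
    // Bodies are stubs here; a real mapper would query the lookup tables.
    class BuildingMapper {
    public:
        Building findById(int /*id*/) { return Building{}; }
        void save(const Building& /*building*/) {}
    };

    The coupling runs one way, as the question notes: a Building knows it holds Level objects, a Level can exist without a Building, and only the mappers know anything about storage, which keeps the domain objects easy to test on their own.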

    Read the article

  • How can I convert a 2D bitmap (Used for terrain) to a 2D polygon mesh for collision?

    - by Megadanxzero
    So I'm making an artillery-type game, sort of similar to Worms, with all the usual stuff like destructible terrain etc., and while I could use per-pixel collision, that doesn't give me collision normals or anything like that. Converting it all to a mesh would also mean I could use an existing physics library, which would be better than anything I can make by myself.

    I've seen people mention doing this by using Marching Squares to get contours in the bitmap, but I can't find anything which mentions how to turn these into a mesh (unless it refers to a 3D mesh with contour lines defining different heights, which is NOT what I want). At the moment I can get a basic Marching Squares contour which looks something like this (where the grid-like lines in the background would be the Marching Squares 'cells'). That needs to be interpolated to get a smoother, more accurate result, but that's the general idea.

    I had a couple of ideas for how to turn this into a mesh, but many of them wouldn't work in certain cases, and the one which I thought would work perfectly has turned out to be very slow and I've not even finished it yet! Ideally I'd like whatever I end up using to be fast enough to run every frame, for cases such as rapidly-firing weapons or digging tools. I'm thinking there must be some kind of existing algorithm/technique for turning something like this into a mesh, but I can't seem to find anything. I've looked at some things like Delaunay Triangulation, but as far as I can tell that won't correctly handle concave shapes like the above example, and also wouldn't account for holes within the terrain.

    I'll go through the technique I came up with for comparison, and I guess I'll see if anyone has a better idea. First of all, interpolate the Marching Squares contour lines, creating vertices from the line ends, and getting vertices where lines cross cell edges (important). Then, for each cell containing vertices, create polygons by using 2 vertices and a cell corner as the 3rd vertex (probably the closest corner). Do this for each cell and I think you should have a mesh which accurately represents the original bitmap (though there will only be polygons at the edges of the bitmap, and large filled-in areas in between will be empty). The only problem with this is that it involves looping through every pixel once for the initial Marching Squares, then looping through every cell (image height + 1 x image width + 1) at least twice, which ends up being really slow for any decently sized image...
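    As an editorial aside (not part of the original question), the cell-classification step of Marching Squares that the question relies on is compact enough to sketch. The bitmap layout below is assumed for illustration; it is not taken from any particular engine.

    // A minimal sketch of the standard Marching Squares cell classification:
    // each cell gets a 4-bit index from its four corner pixels, and that index
    // tells you which cell edges the contour crosses. Interpolated crossing
    // points become the vertices the question describes.
    #include <cstddef>
    #include <cstdint>
    #include <utility>
    #include <vector>

    struct Bitmap {
        int width = 0;
        int height = 0;
        std::vector<uint8_t> solid;  // 1 = terrain, 0 = empty; row-major

        bool isSolid(int x, int y) const {
            if (x < 0 || y < 0 || x >= width || y >= height) return false;
            return solid[static_cast<std::size_t>(y) * width + x] != 0;
        }
    };

    // Case index (0..15) for the cell whose top-left corner pixel is (x, y).
    // Cases 0 and 15 are entirely empty/solid, so they produce no contour
    // segment and no mesh edge.
    int cellCase(const Bitmap& bmp, int x, int y) {
        int index = 0;
        if (bmp.isSolid(x,     y))     index |= 1;  // top-left
        if (bmp.isSolid(x + 1, y))     index |= 2;  // top-right
        if (bmp.isSolid(x + 1, y + 1)) index |= 4;  // bottom-right
        if (bmp.isSolid(x,     y + 1)) index |= 8;  // bottom-left
        return index;
    }

    // Collect only the cells that actually carry contour geometry.
    std::vector<std::pair<int, int>> contourCells(const Bitmap& bmp) {
        std::vector<std::pair<int, int>> cells;
        for (int y = 0; y < bmp.height - 1; ++y) {
            for (int x = 0; x < bmp.width - 1; ++x) {
                int c = cellCase(bmp, x, y);
                if (c != 0 && c != 15) cells.emplace_back(x, y);
            }
        }
        return cells;
    }

    Because a cell's classification depends only on its four corner pixels, an explosion or digging tool only needs to re-run this over the damaged rectangle of cells and re-triangulate that region, rather than reprocessing the whole image every frame, which addresses the performance concern raised at the end of the question.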

    Read the article

  • Updates about Multidimensional vs Tabular #ssas #msbi

    - by Marco Russo (SQLBI)
    I recently read the blog post from James Serra, Tabular model: Not ready for prime time? (read also the comments, because there are discussions about a few points raised by James), and the following post from Christian Wade, Multidimensional or Tabular. In the last 2 years I have worked with many companies adopting Tabular in different scenarios, and I agree with some of the points expressed by James in his post (especially about missing features in Tabular compared to Multidimensional), but I strongly disagree on others.

    In general, Tabular is a good choice for a new project when:

    - the development team does not have a good knowledge of Multidimensional and MDX (DAX is faster to learn; not as easy as it is sold by MS, but definitely easier than MDX)
    - you don't need calculations based on hierarchies (common in certain financial applications, but not as common as it might seem)
    - there are important calculations based on distinct count measures
    - there are complex calculations based on many-to-many relationships

    Until now, I have never suggested migrating an existing Multidimensional model to a Tabular one. There should be very important reasons for that, such as performance issues in distinct count and many-to-many relationships that cannot be easily solved by optimizing the Multidimensional model, but I still have never encountered this scenario. I would say that in 80% of new projects you might use either Multidimensional or Tabular, and the real difference is the time-to-market depending on the skills of the development team. So it's not strange that those who are used to Multidimensional are not moving to Tabular, since they don't get a particular benefit from the new model unless specific requirements exist. The recent DAXMD feature that allows using SharePoint Power View on Multidimensional is a really important one, even if I'd also like to have Excel Power View enabled for this scenario (this should be just a question of time).

    Another scenario in which I'm seeing growing adoption of Tabular is in companies that create models for their product/service and do that by using XMLA or Tabular AMO 2012. I am used to calling them ISVs, even if those providing services cannot really be defined in this way. These companies are facing the multitenancy challenge with Tabular, and even if this is a niche market, I see some potential here, because adopting Tabular seems a much more natural choice than Multidimensional in those scenarios where an analytical engine has to be embedded to deliver one of the features of a larger product/service delivered to customers.

    I'd like to see other feedback in the comments: tell your story of choosing between Tabular and Multidimensional in a BI project you started with SQL Server 2012, thanks!

    Read the article

  • How do we keep dependent data structures up to date?

    - by Geo
    Suppose you have a parse tree, an abstract syntax tree, and a control flow graph, each one logically derived from the one before. In principle it is easy to construct each graph given the parse tree, but how can we manage the complexity of updating the graphs when the parse tree is modified? We know exactly how the tree has been modified, but how can the change be propagated to the other trees in a way that doesn't become difficult to manage? Naturally the dependent graph can be updated by simply reconstructing it from scratch every time the first graph changes, but then there would be no way of knowing the details of the changes in the dependent graph.

    I currently have four ways to attempt to solve this problem, but each one has difficulties:

    1. Nodes of the dependent tree each observe the relevant nodes of the original tree, updating themselves and the observer lists of original tree nodes as necessary. The conceptual complexity of this can become daunting.
    2. Each node of the original tree has a list of the dependent tree nodes that specifically depend upon it, and when the node changes it sets a flag on the dependent nodes to mark them as dirty, including the parents of the dependent nodes all the way down to the root. After each change we run an algorithm that is much like the algorithm for constructing the dependent graph from scratch, but it skips over any clean node and reconstructs each dirty node, keeping track of whether the reconstructed node is actually different from the dirty node. This can also get tricky.
    3. We can represent the logical connection between the original graph and the dependent graph as a data structure, like a list of constraints, perhaps designed using a declarative language. When the original graph changes we need only scan the list to discover which constraints are violated and how the dependent tree needs to change to correct the violation, all encoded as data.
    4. We can reconstruct the dependent graph from scratch as though there were no existing dependent graph, and then compare the existing graph and the new graph to discover how it has changed. I'm sure this is the easiest way because I know there are algorithms available for detecting differences, but they are all quite computationally expensive and in principle it seems unnecessary, so I'm deliberately avoiding this option.

    What is the right way to deal with these sorts of problems? Surely there must be a design pattern that makes this whole thing almost easy. It would be nice to have a good solution for every problem of this general description. Does this class of problem have a name?
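    As an illustrative aside (not from the question itself), the dirty-flag bookkeeping of option 2 can be sketched in a few lines; the node types below are invented for the example.

    // Minimal sketch of option 2: each original node records which dependent
    // nodes were derived from it; a change marks those dependents and their
    // ancestors dirty, and a later pass rebuilds only the dirty nodes.
    #include <vector>

    struct DependentNode {
        DependentNode* parent = nullptr;
        std::vector<DependentNode*> children;
        bool dirty = false;

        void markDirty() {
            // Propagate upwards so the rebuild pass knows which subtrees to visit.
            for (DependentNode* n = this; n != nullptr && !n->dirty; n = n->parent)
                n->dirty = true;
        }
    };

    struct OriginalNode {
        std::vector<OriginalNode*> children;
        std::vector<DependentNode*> dependents;  // nodes derived from this one

        void notifyChanged() {
            for (DependentNode* d : dependents)
                d->markDirty();
        }
    };

    // Rebuild pass: skip clean subtrees, re-derive dirty nodes in place.
    // rebuildNode stands in for "re-derive this node from the original tree".
    void rebuild(DependentNode* node, void (*rebuildNode)(DependentNode*)) {
        if (node == nullptr || !node->dirty) return;
        rebuildNode(node);
        for (DependentNode* child : node->children)
            rebuild(child, rebuildNode);
        node->dirty = false;
    }

    This is essentially the pattern used by build systems and attribute-grammar or reactive frameworks, and "incremental computation" (sometimes "self-adjusting computation") is probably the closest thing to a name for this class of problem.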

    Read the article

  • Simplifying C++11 optimal parameter passing when a copy is needed

    - by Mr.C64
    It seems to me that in C++11 a lot of attention was paid to simplifying the returning of values from functions and methods: with move semantics it's possible to simply return heavy-to-copy but cheap-to-move values (while in C++98/03 the general guideline was to use output parameters via non-const references or pointers), e.g.:

    // C++11 style
    vector<string> MakeAVeryBigStringList();

    // C++98/03 style
    void MakeAVeryBigStringList(vector<string>& result);

    On the other hand, it seems to me that more work should be done on input parameter passing, in particular when a copy of an input parameter is needed, e.g. in constructors and setters. My understanding is that the best technique in this case is to use templates and std::forward<>, e.g. (following the pattern of this answer on C++11 optimal parameter passing):

    class Person {
        std::string m_name;
    public:
        template <class T, class = typename std::enable_if<
            std::is_constructible<std::string, T>::value
        >::type>
        explicit Person(T&& name)
            : m_name(std::forward<T>(name)) {
        }
        ...
    };

    Similar code could be written for setters. Frankly, this code seems boilerplate-heavy and complex, and doesn't scale up well when there are more parameters (e.g. if a surname attribute is added to the above class). Would it be possible to add a new feature to C++11 to simplify code like this (just like lambdas simplify C++98/03 code with functors in several cases)? I was thinking of a syntax with some special character, like @ (since introducing a &&& in addition to && would be too much typing :) e.g.:

    class Person {
        std::string m_name;
    public:
        /* Simplified syntax to produce boilerplate code like this:
           template <class T, class = typename std::enable_if<
               std::is_constructible<std::string, T>::value
           >::type> */
        explicit Person(std::string@ name)
            : m_name(name) // implicit std::forward as well
        {
        }
        ...
    };

    This would be very convenient also for more complex cases involving more parameters, e.g.:

    Person(std::string@ name, std::string@ surname)
        : m_name(name), m_surname(surname) {
    }

    Would it be possible to add a simplified convenient syntax like this in C++? What would be the downsides of such a syntax?
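    For comparison (an editorial aside, not part of the original question), the usual low-boilerplate alternative to the enable_if/std::forward pattern is simply to take such parameters by value and move from them. A minimal sketch:

    // Sketch of the common "pass by value, then move" alternative: callers
    // passing an lvalue pay one copy plus a cheap move; callers passing an
    // rvalue pay only moves. It trades at most one extra move per argument
    // for far less boilerplate than the perfectly-forwarded version above,
    // and it scales naturally to extra parameters such as a surname.
    #include <string>
    #include <utility>

    class Person {
        std::string m_name;
        std::string m_surname;
    public:
        Person(std::string name, std::string surname)
            : m_name(std::move(name)), m_surname(std::move(surname)) {
        }

        void SetName(std::string name) {
            m_name = std::move(name);
        }
    };

    int main() {
        Person p("Ada", "Lovelace");   // rvalues: the temporaries are moved in
        std::string n = "Grace";
        p.SetName(n);                  // lvalue: one copy into the parameter, then a move
        return 0;
    }

    For types like std::string the extra move is normally cheap enough that the simpler signature is the better trade, though it is worth measuring for types where a move is not trivial.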

    Read the article

  • Source-control 'wet-work'?

    - by Phil Factor
    When a design or creative work is flawed beyond remedy, it is often best to destroy it and start again. The other day, I lost the code to a long and intricate SQL batch I was working on. I'd thought it was impossible, but it happened. With all the technology around that is designed to prevent this occurring, this sort of accident has become a rare event. If it weren't for a deranged laptop, and my distraction, the code wouldn't have been lost this time. As always, I sighed, had a soothing cup of tea, and typed it all in again. The new code I hastily tapped in was much better: I'd held in my head the essence of how the code should work rather than the details; I now knew for certain the start point, the end, and how it should be achieved. Instantly the detritus of half-baked thoughts fell away and I was able to write logical code that performed better. Because I could work so quickly, I was able to hold the details of all the columns and variables in my head, and the dynamics of the flow of data. It was, in fact, easier and quicker to start from scratch rather than tidy up and refactor the existing code with its inevitable fumbling and half-baked ideas.

    What a shame that technology is now so good that developers rarely experience the cleansing shock of losing one's code and having to rewrite it from scratch. If you've never accidentally lost your code, then it is worth doing it deliberately once for the experience. Creative people have, until Technology mistakenly prevented it, torn up their drafts or sketches, thrown them in the bin, and started again from scratch. Leonardo's obsessive reworking of the Mona Lisa was renowned because it was so unusual: most artists have been utterly ruthless in destroying work that didn't quite make it. Authors are particularly keen on writing afresh, and the results are generally positive. Lawrence of Arabia actually lost the entire 250,000-word manuscript of 'The Seven Pillars of Wisdom' by accidentally leaving it on a train at Reading station, before rewriting a much better version. Now, any writer or artist is seduced by technology into altering or refining their work rather than casting it dramatically in the bin or setting a light to it on a bonfire, and rewriting it from the blank page. It is easy to pick away at a flawed work, but the real creative process is far more brutal.

    Once, many years ago, whilst running a software house that supplied commercial software to local businesses, I'd been supervising an accounting system for a farming cooperative. No packaged system met their needs, and it was all hand-cut code. For us it represented a breakthrough, as it was for a government organisation, and success would guarantee more contracts. As you've probably guessed, the code got mangled in a disk crash just a week before the deadline for delivery, and the many backups all proved to be entirely corrupted by a faulty tape drive. There were some fragments left on individual machines, but they were all of different versions. The developers were in despair. Strangely, I managed to re-write the bulk of a three-month project in a manic and caffeine-soaked weekend. Sure, that elegant universally-applicable input-form routine wasn't quite so elegant, but it didn't really need to be, as we knew what forms it needed to support. Yes, the code lacked architectural elegance and reusability. By dawn on Monday, the application passed its integration tests. The developers rose to the occasion after I'd collapsed, and tidied up what I'd done, though they were reproachful that some of the style and elegance had gone out of the application. By the delivery date, we were able to install it. It was a smaller, faster application than the beta they'd seen, and the user-interface had a new, rather Spartan, appearance that we swore was done to conform to the latest in user-interface guidelines. (We switched to the Helvetica font to look more 'Bauhaus'.) The client was so delighted that he forgave the new bugs that had crept in. I still have the disk that crashed, up in the attic.

    In IT, we have had mixed experiences with complete rewrites. Lotus 1-2-3 never really recovered from a complete rewrite from assembler into C; Borland made the same mistake with Arago and Quattro Pro; and Netscape's complete rewrite of their Navigator 4 browser was a white-knuckle ride. In all cases, the decision to rewrite was a result of extreme circumstances where no other course of action seemed possible. The rewrite didn't come out of the blue. I prefer to remember the rewrite of Minix by a young Linus Torvalds, or the rewrite of BitKeeper by a slightly older Linus. The rewrite of CP/M didn't do too badly either, did it? Come to think of it, the guy who decided to rewrite the windowing system of the Xerox Star never regretted the decision.

    I'll agree that one should often resist calls for a rewrite. One of the worst habits of the more inexperienced programmer is to denigrate whatever code he or she inherits, and then call loudly for a complete rewrite. They are buoyed up by the mistaken belief that they can do better. This, however, is a different psychological phenomenon, more related to the idea of some motorcyclists that they are operating on infinite lives, or the occasional squaddie's belief that if they charge the machine-guns determinedly enough all will be well. Grim experience brings out the humility in any experienced programmer. I'm referring to quite different circumstances here. Where a team knows the requirements perfectly, is of one mind on methodology and coding standards, and already has a solution, then what is wrong with considering a complete rewrite? Rewrites are so painful in the early stages, until that point where one realises the payoff, that even I quail at the thought. One needs a natural disaster to push one over the edge.

    The trouble is that source-control systems, and disaster recovery systems, are just too good nowadays. If I were to lose this draft of this very blog post, I know I'd rewrite it much better. However, if you read this, you'll know I didn't have the nerve to delete it and start again. There was a time that one prayed that unreliable hardware would deliver you from an unmaintainable mess of a codebase, but now technology has made us almost entirely immune to such a merciful act of God. An old friend of mine with long experience in the software industry has long had the idea of the 'source-control wet-work', where one hires a malicious hacker in some wild eastern country to hack into one's own source control system to destroy all trace of the source to an application. Alas, backup systems are just too good to make this any more than a pipedream. Somehow, it would be difficult to promote the idea. As an alternative, could one construct a source control system that, on running all the code-quality metrics, would systematically destroy all trace of source code that failed the quality test? Alas, I can't see many managers buying into the idea.

    Reading the full story of the near-loss of Toy Story 2 set me thinking. It turned out that the lucky restoration of the code wasn't the happy ending one first imagined it to be, because they eventually came to the conclusion that the plot was fundamentally flawed and it all had to be rewritten anyway. Was this an early case of the 'source-control wet-job'? It is very hard nowadays to do a rapid U-turn in a development project because we are far too prone to cling to our existing source code.

    Read the article

< Previous Page | 237 238 239 240 241 242 243 244 245 246 247 248  | Next Page >