Search Results

Search found 27142 results on 1086 pages for 'control structure'.


  • I love it when a plan comes together

    - by DavidWimbush
    I'm currently working on an application so that our Marketing department can produce most of their own mailing lists without my having to get involved. It was all going well until I got stuck on the bit where the actual SQL query is generated, but a rummage in Books Online revealed a very clean solution using some constructs that I had previously dismissed as pointless.

    Typically we want to email every customer who is in any of the following n groups. Experience shows that a group has the following definition:

        <people who have done A>
        [(AND <people who have done B>) | (OR <people who have done C>)]
        [APART FROM <people who have done D>]

    When doing these by hand I've been using INNER JOIN for the AND, UNION for the OR, and LEFT JOIN + WHERE D IS NULL for the APART FROM. This would produce two quite different queries:

        -- Old OR
        select  A.PersonID
        from  (
                -- A
                select  PersonID from ...
                union  -- OR
                -- C
                select  PersonID from ...
              ) AorC
              left join  -- APART FROM
              (
                select  PersonID from ...
              ) D on D.PersonID = AorC.PersonID
        where  D.PersonID is null

        -- Old AND
        select  distinct A.PersonID
        from  (
                -- A
                select  PersonID from ...
              ) A
              inner join  -- AND
              (
                -- B
                select  PersonID from ...
              ) B on B.PersonID = A.PersonID
              left join  -- APART FROM
              (
                select  PersonID from ...
              ) D on D.PersonID = A.PersonID
        where  D.PersonID is null

    But when I tried to write the code that could generate the SQL for any combination of those (along with all the variables controlling what each SELECT did and what was in all the optional bits of each WHERE clause), my brain started to hurt. Then I remembered reading about the (then new to me) keywords INTERSECT and EXCEPT. At the time I couldn't see what they added, but I thought I would have a play and see if they might help. They were perfect for this. Here's the new query structure:

        -- The way forward
        select  PersonID
        from  (
                (
                  (
                    -- A
                    select  PersonID from ...
                  )
                  union      -- OR
                  intersect  -- AND (one or the other)
                  (
                    -- B/C
                    select  PersonID from ...
                  )
                )
                except
                (
                  -- D
                  select  PersonID from ...
                )
              ) x

    I can easily swap between UNION and INTERSECT, and omit B, C, or D as necessary. Elegant, clean and readable - pick any 3! Sometimes it really pays to read the manual.
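    In case you'd also dismissed these keywords, here's a minimal sketch of how the three operators behave (against hypothetical #GroupA/#GroupB temp tables, not our real schema):

        -- Hypothetical data: two small groups of people
        create table #GroupA (PersonID int);
        create table #GroupB (PersonID int);
        insert into #GroupA values (1), (2), (3);
        insert into #GroupB values (2), (3), (4);

        select PersonID from #GroupA
        union       -- OR: in either group -> 1, 2, 3, 4
        select PersonID from #GroupB;

        select PersonID from #GroupA
        intersect   -- AND: in both groups -> 2, 3
        select PersonID from #GroupB;

        select PersonID from #GroupA
        except      -- APART FROM: in A but not in B -> 1
        select PersonID from #GroupB;

    A nice bonus: all three operators remove duplicate rows, which is exactly what you want in a mailing list.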

    Read the article

  • Overload Avoidance

    - by mikef
    A little under a year ago, Matt Simmons wrote a rather reflective article about his terrifying brush with stress-induced ill health. SysAdmins and DBAs have always been prime victims of work-related stress, but I wonder if that predilection is perhaps getting worse, despite the best efforts of Matt and his trusty side-kick, HR. The constant pressure from share-holders and CFOs to 'streamline' the workforce is partially to blame, but the more recent culprit is technology itself.

    I can't deny that the rise of technologies like virtualization, PowerCLI, PowerShell, and a host of others has been a tremendous boon. As a result, individual IT professionals are now able to handle more and more tasks and manage increasingly large and complex environments. But, without a doubt, this is a two-edged sword; the reward for competence is invariably more work.

    Unfortunately, SysAdmins play such a pivotal role in modern business that it's easy to see how they can very quickly become swamped in conflicting demands coming from different directions. However, that doesn't justify the ridiculous hours many are asked (or volunteer) to devote to their work. Admirable though their commitment is, it isn't healthy for them, it sets a dangerous expectation, and eventually something will snap. There are times when everyone needs to step up to the plate outside of 'normal' work hours, but that time isn't all the time.

    Naturally, with all that lovely technology, you can automate more and more of those tricky tasks to keep on top of the workload, but you are still only human. Clever though you may be, there is a very real limit to how far technology can take you. I'm not suggesting that you avoid these technologies, or deliberately aim for mediocrity; I'm just saying that you need to be more than just technically skilled (and Wesley Nonapeptide riffs on and around this topic in his excellent 'Telepathic Robot Drones' blog post). You need to be able to manage expectations, not just Exchange. Specifically, that means your own expectations of what you are capable of, because those come before everyone else's. After all, how can you keep your work-life balance under control if you're the one setting the bar way too high?

    Talking to your manager, or discussing issues with your users, is only going to be productive if you have some facts to work with. "Know Thyself" is the first law of managing work overload, and this is obviously a skill which people develop over time; the fact that veteran SysAdmins exist at all is testament to this. I'd just love to know how you get to that point. Personally, I'm using RescueTime to keep myself honest, but I'm open to recommendations for better methods. Do you track your own time, do you have an intuitive sense of what is possible, or do you just rely on someone else to handle all that for you?

    Cheers,
    Michael

    Read the article

  • It's intellisense for SQL Server

    - by Nick Harrison
    Anyone who has ever worked with me, heard me speak, or read any of my writings knows that I am a HUGE fan of Reflector. By extension, I am a big fan of Red Gate. I have recently begun exploring some of their other offerings and came across this jewel.

    SQL Prompt is a plug-in for Visual Studio and SQL Server Management Studio. It provides several tools to make dealing with SQL a little easier for your friendly neighborhood developer. When you open a query window in a database, the plug-in kicks in and gathers the metadata for the database that you are in. As you type a query, you get handy feedback like a list of tables after you type SELECT. You can select one of the tables, specify * and then tab to expand the select clause to include all of the columns from the selected table. As you are building up the where clause, you are prompted with the names of columns in the selected tables.

    If you spend any time writing ad hoc queries or building stored procedures by hand, this can save you substantial time. If you are learning a new data model, this can greatly cut down on your frustration level.

    The other really cool thing here is Format SQL. I have searched all over the place for a really good SQL formatter. Badly formatted SQL is so much harder to read than well formatted SQL. Unfortunately, Management Studio offers no support for keeping your SQL well formatted. There are many tools available to format your SQL. Some work better than others. Some don't work that well at all. Most will give you some measure of control over how the formatted SQL looks. SQL Prompt produces good results and is easy to configure.

    Sadly no tool is perfect, and what would we be without a wish list. There are some features that I would like to see:

    - Make it easier to paste SQL in and out of code (strip off string builders, etc.)
    - Automate replacing hard coded values with bind variables or parameters (sketched at the end of this post)
    - In addition to reformatting SQL, which is a huge refactor, support for other SQL refactors would be nice (convert join to subquery and vice versa come to mind)

    Wish list aside, this is a wonderful tool that easily saves me an hour or more on most weeks.
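    For example, the bind-variable refactor I'm wishing for would take something like this (a hypothetical ad hoc query, not actual SQL Prompt output) and hoist the literals into parameters:

        -- Before: hard coded literals buried in the query
        select OrderID, OrderDate
        from   Orders
        where  CustomerID = 42
          and  OrderDate >= '2010-01-01';

        -- After: the same query, parameterized
        declare @CustomerID int;
        declare @StartDate  datetime;
        set @CustomerID = 42;
        set @StartDate  = '2010-01-01';

        select OrderID, OrderDate
        from   Orders
        where  CustomerID = @CustomerID
          and  OrderDate >= @StartDate;

    The payoff is plan reuse, and one obvious place to change the values when you rerun the query.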

    Read the article

  • Oracle Utilities Application Framework V4.2.0.0.0 Released

    - by ACShorten
    The Oracle Utilities Application Framework V4.2.0.0.0 has been released with Oracle Utilities Customer Care And Billing V2.4. This release includes new functionality and updates to existing functionality and will be progressively released across the Oracle Utilities applications. The release is quite substantial with lots of new and exciting changes. The release notes shipped with the product include a summary of the changes implemented in V4.2.0.0.0. They include the following:

    - Configuration Migration Assistant (CMA) - A new data management capability to allow you to export and import Configuration Data from one environment to another, with support for Approval/Rejection of individual changes.
    - Database Connection Tagging - Additional tags have been added to the database connection to allow database administrators, Oracle Enterprise Manager and other Oracle technology the ability to monitor and use individual database connection information.
    - Native Support for Oracle WebLogic - In the past the Oracle Utilities Application Framework used Oracle WebLogic in embedded mode; now, to support advanced configuration and the ExaLogic platform, we are adding Native Support for Oracle WebLogic as a configuration option.
    - Native Web Services Support - In the past the Oracle Utilities Application Framework supplied a servlet to handle Web Services calls; we now offer an alternative that uses the native Web Services capability of Oracle WebLogic. This allows for enhanced clustering, a greater level of Web Service standards support, enhanced security options and the ability to use the Web Services management capabilities in Oracle WebLogic to implement higher levels of management, including defining additional security rules to control access to individual Web Services.
    - XML Data Type Support - Oracle Utilities Application Framework now allows implementors to define XML Data types used in Oracle in the definition of custom objects to take advantage of XQuery and other XML features.
    - Fuzzy Operator Support - Oracle Utilities Application Framework supports the use of the fuzzy operator in conjunction with Oracle Text to take advantage of the fuzzy searching capabilities within the database (a quick sketch appears at the end of this post).
    - Global Batch View - A new JMX based API has been implemented to allow JSR120 compliant consoles the ability to view batch execution across all threadpools in the Coherence based Named Cache Cluster.
    - Portal Personalization - It is now possible to store the runtime customizations of query zones, such as preferred sorting, field order and filters, to reuse as personal preferences each time that zone is used.

    These are just the major changes, and there are quite a few more that have been delivered (and more to come in the service packs!!). Over the next few weeks we will be publishing new whitepapers and new entries in this blog outlining new facilities that you may want to take advantage of.
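    As a taste of the fuzzy searching mentioned above, here is a rough sketch of the underlying Oracle Text usage (against a hypothetical asset table, not the framework's actual objects; it assumes a CONTEXT index on the searched column):

        -- Hypothetical: index a description column with Oracle Text
        create index asset_desc_idx on asset (description)
          indextype is ctxsys.context;

        -- fuzzy(term, score, numresults, weight) matches similar terms,
        -- so the misspelled 'transfomer' can still find 'transformer'
        select asset_id, description
        from   asset
        where  contains(description, 'fuzzy(transfomer, 60, 100, weight)', 1) > 0
        order  by score(1) desc;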

    Read the article

  • Uninstalling with Ubuntu Software Center doesn't work on Ubuntu 12.04.1 64bit

    - by likethesky
    Not sure if I'm doing something wrong, or if the .deb package I'm installing is broken in some way (I've built it, using NetBeans 7.2), or if indeed this is a bug in Software Center.

    When I install this particular 32-bit .deb on Ubuntu 10.04 LTS--all updates applied--(where it was built), GDebi shows it and has an 'Uninstall' button next to it. So it works fine to uninstall it there, via the GDebi GUI. However, when I install it on 12.04.1 LTS--all updates applied--it installs fine, but then does not show up in Ubuntu Software Center as available to be uninstalled. No combination of searching finds it. However, I can, from the command line, do sudo apt-get purge javafxapplication1 and it finds it and deletes it.

    The same thing happens when I build a 64-bit .deb and attempt to install it to the same (64-bit AMD) or a different 64-bit Ubuntu 12.04.1 system. So it seems to be isolated to this NetBeans-generated .deb and the 64-bit AMD build (though I haven't tried it on a 32-bit 12.04.1 install yet). These are all on VirtualBox VMs, btw, if that matters.

    Any way to clean up my Software Center and see if it's something I've done to get it in this state? Could this behavior be due to how this particular .deb has been built? (It doesn't have an 'Installed-Size' control field, so I do get the "Package is of bad quality" warning when I install it--which I do by clicking the 'Ignore and install' button.) If you want all the gory details about why this is happening, a bug has been reported against NetBeans for this behavior here: http://javafx-jira.kenai.com/browse/RT-25486

    (EDIT: Just to be clear, the app installs fine, runs fine, all works as intended--I just can't get that 'bad package' message to go away, and now... I also can't uninstall it via Software Center, but rather, need to use sudo apt-get purge to uninstall it, after it installs.)

    Thanks for any pointers. I'm happy to report this as a bug against Ubuntu Software Center/Centre too, if that's what it seems to be, just tell me where to do so (a link). I'm a relative Ubuntu, NetBeans, and JavaFX newbie, though a long-time programmer. If I report it as a bug, I'll try it on the 32-bit build of 12.04.1 as well. Also, if I should add any more detail to the bug reported against NetBeans above, let me know--or feel free to add it yourself to the bug report above, if you would like. Thanks again!

    Read the article

  • Dropped impression 25 days after restructure

    - by Hamid
    Our website is a non-English property-related website (moshaver.com), similar to rightmove.co.uk. In September 2012 our website was adversely affected by Panda, causing our incoming clicks from Google to drop from around 3000 clicks to less than a thousand. We were hoping that Google would eventually realize that we are not a spam website and things would get better. However, in August 2013 we were almost sure that we needed to do something, so we started to restructure our web content. We used the canonical tag to remove our search results and point to our listing pages, and used the noindex tag to remove those listing pages which do not have any properties at the moment. We also changed title tags to more friendly ones, in addition to other changes. Our changes went into effect on 10th August.

    As shown in the graph taken from the Search Engine Optimization section of Google Analytics, these changes resulted in an increase in the number of times Google displayed our results in its search results. Our impressions almost doubled starting 15th August. However, as the graph shows, our CTR dropped from this date from around 15% to 8%. This might have been because of our changed title tags (so people were less likely to click on them), or it might be normal for increased impressions.

    This situation continued up until 10th September, when our impressions decreased dramatically to less than a thousand. This is almost 30% of our original impressions (before the website restructure) and 15% of the new impressions. At the same time our CTR increased dramatically to around 50%. I have two theories for this increase. The first one is that these statistics are less accurate for lower impressions. The second one is that Google is now only displaying our results for queries directly related to our website (our name, our URL), and not for general terms, such as "apartments in a specific city". The second theory also explains the dramatic decrease in impressions as well.

    After digging through the analytics data a little more, I constructed the following table. It displays the breakdown of our impressions, clicks and CTR in different Google products (web and image) and in total. What I understand from this table is that most of our increased impressions after the restructure were on the image search section. I don't think users of image search would be looking for content on our website. Furthermore, it shows that the drop in our web search CTR is not as dramatic as the drop in the overall CTR (-30% compared to -60%). I thought posting it here might help you understand the situation better.

    Is it possible that Google has tested our new structure for 25 days, and then decided to decrease our impressions because of the new low CTR? Or should we look for another factor? If this is the case, how long does it usually take for Google to give us another chance? It has been one month since our impressions dropped.

    Read the article

  • Big Data – Operational Databases Supporting Big Data – Columnar, Graph and Spatial Database – Day 14 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of the Key-Value Pair Databases and Document Databases in the Big Data Story. In this article we will understand the role of Columnar, Graph and Spatial Databases in supporting the Big Data Story. Now we will see a few examples of the operational databases.

    - Relational Databases (The day before yesterday's post)
    - NoSQL Databases (The day before yesterday's post)
    - Key-Value Pair Databases (Yesterday's post)
    - Document Databases (Yesterday's post)
    - Columnar Databases (Today's post)
    - Graph Databases (Today's post)
    - Spatial Databases (Today's post)

    Columnar Databases

    A Relational Database is a row store database, or a row-oriented database. Columnar databases are column-oriented, or column store, databases. As we discussed earlier, in Big Data we have different kinds of data and we need to store those different kinds of data in the database. With a columnar database it is very easy to do so, as we can just add a new column to the columnar database. HBase is one of the most popular columnar databases. It uses the Hadoop file system and MapReduce for its core data storage. However, remember this is not a good solution for every application. It is particularly good for databases where a high volume of incremental data is gathered and processed.

    Graph Databases

    For highly interconnected data it is suitable to use a Graph Database. This database has a node-relationship structure. Nodes and relationships contain a Key-Value Pair where data is stored. The major advantage of this database is that it supports faster navigation among various relationships. For example, Facebook uses a graph database to list and demonstrate various relationships between users. Neo4J is one of the most popular open source graph databases. One major disadvantage of the Graph Database is that it is not possible to self-reference (self joins, in RDBMS terms), and there might be real world scenarios where this is required and the graph database does not support it.

    Spatial Databases

    We all use Foursquare, Google+ as well as Facebook Check-ins for location-aware check-ins. All the location-aware applications figure out the position of the phone with the help of the Global Positioning System (GPS). Think about it: so many different users at different locations in the world, all checking in together. Additionally, the applications are now feature rich and users are demanding more and more information from them, for example movies, coffee shops or places to see. They are all running with the help of Spatial Databases. Spatial data is standardized by the Open Geospatial Consortium, known as OGC. Spatial data helps answer many interesting questions like "distance between two locations, area of interesting places, etc." (a sketch of such a query appears at the end of this post). When we think of it, it is very clear that handling spatial data and returning meaningful results is one big task when there are millions of users moving dynamically from one place to another and requesting various spatial information. The PostGIS/OpenGIS suite is a very popular spatial database. It runs as a layer implementation on the RDBMS PostgreSQL. This makes it totally unique, as it offers the best from both worlds.

    Tomorrow

    In tomorrow's blog post we will discuss a very important component of the Big Data Ecosystem - Hive.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
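    As promised above, here is what the "distance between two locations" question might look like in PostGIS (a sketch with made-up coordinates; casting to the geography type makes ST_Distance return meters):

        -- Distance in meters between two GPS points (longitude, latitude)
        SELECT ST_Distance(
                 ST_SetSRID(ST_MakePoint(-73.9857, 40.7484), 4326)::geography,
                 ST_SetSRID(ST_MakePoint(-118.2437, 34.0522), 4326)::geography
               ) AS distance_in_meters;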

    Read the article

  • What does the `dmesg` error: "composite sync not supported" mean?

    - by M. Tibbits
    Question: I see

        [ 20.473125] composite sync not supported

    and several such entries when I run dmesg. What do they mean?

    Background: I'm trying to debug a problem where my laptop won't suspend. Since acpi seems happy and I can suspend easily from the command line, I've turned to tracking down all boot-up errors/warnings. So I run dmesg | grep not and, amongst other shtuff, I get:

        728:[ 17.267120] composite sync not supported
        733:[ 18.009061] composite sync not supported
        740:[ 18.159289] registered panic notifier
        749:[ 18.162500] vga16fb: not registering due to another framebuffer present
        757:[ 18.598251] composite sync not supported
        776:[ 20.473125] composite sync not supported
        777:[ 20.932266] composite sync not supported
        778:[ 28.350231] composite sync not supported
        779:[ 28.924913] composite sync not supported
        780:[ 35.480658] composite sync not supported

    And the full log for the few lines right around that first appearance (line 728) is listed at the bottom of my post (I'd happily include anything else). Any ideas what could be causing this? I've read several sites: Ubuntuforums #1, IRC Chat #1. One post talks about ??Adobe flash?? causing this error? Some others also suggest that it might be an nvidia-related problem, but I've got a Dell Latitude D630 with integrated Intel graphics -- so nvidia isn't the problem.

        [ 17.207142] phy0: Selected rate control algorithm 'minstrel'
        [ 17.207833] Registered led device: b43-phy0::tx
        [ 17.207849] Registered led device: b43-phy0::rx
        [ 17.207865] Registered led device: b43-phy0::radio
        [ 17.207927] Broadcom 43xx driver loaded [ Features: PL, Firmware-ID: FW13 ]
        [ 17.267120] composite sync not supported
        [ 17.415795] EXT4-fs (sda2): mounted filesystem with ordered data mode
        [ 17.602131] [drm] initialized overlay support
        [ 17.620201] input: DualPoint Stick as /devices/platform/i8042/serio1/input/input7
        [ 17.641192] input: AlpsPS/2 ALPS DualPoint TouchPad as /devices/platform/i8042/serio1/input/input8
        [ 18.009061] composite sync not supported
        [ 18.106042] pcmcia_socket pcmcia_socket0: cs: IO port probe 0x100-0x3af: clean.
        [ 18.108115] pcmcia_socket pcmcia_socket0: cs: IO port probe 0x3e0-0x4ff: clean.
        [ 18.108941] pcmcia_socket pcmcia_socket0: cs: IO port probe 0x820-0x8ff: clean.
        [ 18.109676] pcmcia_socket pcmcia_socket0: cs: IO port probe 0xc00-0xcf7: clean.
        [ 18.110356] pcmcia_socket pcmcia_socket0: cs: IO port probe 0xa00-0xaff: clean.
        [ 18.159286] fb0: inteldrmfb frame buffer device
        [ 18.159289] registered panic notifier
        [ 18.160218] input: Video Bus as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A03:00/LNXVIDEO:01/input/input9
        [ 18.160286] ACPI: Video Device [VID1] (multi-head: yes rom: no post: no)
        [ 18.160334] ACPI Warning for \_SB_.PCI0.VID2._DOD: Return Package has no elements (empty) (20090903/nspredef-433)
        [ 18.160432] input: Video Bus as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A03:00/LNXVIDEO:02/input/input10
        [ 18.160491] ACPI: Video Device [VID2] (multi-head: yes rom: no post: no)
        [ 18.160539] [drm] Initialized i915 1.6.0 20080730 for 0000:00:02.0 on minor 0
        [ 18.162494] vga16fb: initializing
        [ 18.162497] vga16fb: mapped to 0xc00a0000
        [ 18.162500] vga16fb: not registering due to another framebuffer present
        [ 18.176091] HDA Intel 0000:00:1b.0: PCI INT A -> GSI 21 (level, low) -> IRQ 21
        [ 18.176123] HDA Intel 0000:00:1b.0: setting latency timer to 64
        [ 18.285752] input: HDA Digital PCBeep as /devices/pci0000:00/0000:00:1b.0/input/input11
        [ 18.312497] input: HDA Intel Mic at Ext Left Jack as /devices/pci0000:00/0000:00:1b.0/sound/card0/input12
        [ 18.312586] input: HDA Intel HP Out at Ext Left Jack as /devices/pci0000:00/0000:00:1b.0/sound/card0/input13
        [ 18.328043] usbcore: registered new interface driver ndiswrapper
        [ 18.460909] Console: switching to colour frame buffer device 180x56
        [ 18.598251] composite sync not supported

    Read the article

  • Dynamic character animation - Using the physics engine or not

    - by Lex Webb
    I'm planning on building a dynamic reactant animation engine for the characters in my 2D game. I have already built templates for a skeleton-based animation system using key frames and interpolation to specify a limb's position at any given moment in time. I am using Farseer Physics (an extension of Box2D) in Monogame/XNA in C#.

    My real question lies in how I go about tying this character animation into the physics engine. I have two options:

    1. Moving limbs using the physics engine - applying an interpolated force to each limb (dynamic body) in order to attempt to get it to the position dictated by the skeleton animation.
    2. Moving limbs by simply changing the position of a fixed body - updating the new position of each limb manually, attempting to take into account physics collisions, then stepping the physics after the animation to allow for environment interaction.

    Each of these methods has its distinct advantages and disadvantages.

    Physics based movement

    Advantages:
    - Possibly more natural/realistic movement
    - Better interaction with game objects, as the force applied to objects colliding with characters would be calculated for me
    - No need to convert to dynamic bodies when reacting to projectiles/death/fighting

    Disadvantages:
    - Possible difficulty in calculating the correct amount of force to move a limb a certain distance at a constant rate
    - An underlying character balance system would need to be created, robust enough to prevent characters falling over at the touch of a feather
    - Added code complexity and processing time for the above

    Static object movement

    Advantages:
    - Easy to interpolate movement of limbs between game steps
    - Moving limbs is as simple as applying a rotation to the skeleton bone
    - Greater control over limbs; won't need to worry about characters falling over as all animation would be pre-defined

    Disadvantages:
    - Possibly unnatural movement (depends entirely on my animation skills!)
    - Bad collision reactions with the physics engine (dynamic bodies simply slide out of the way of static objects)
    - Need to calculate collisions between physics objects and my limbs myself and apply directional forces to them
    - Hard to account for slopes/stairs/non-standard planes when animating walking/running animations
    - Need to convert objects to dynamic when reacting to projectile/fighting/death physics objects

    The Question!

    As you can see, I have thought about this extensively. I have also Googled physics-based animation and have found mostly dissertation papers, which is filling me with the sense that it may be a lot more advanced than my mathematics skills. My question is mostly subjective, based on my findings above and any experience you may have: which of the above methods should I use when creating my game? I am willing to spend the time to get a physics solution working if you think it would be possible. In the end I want to provide the most satisfying experience for the gamer, as well as a robust and dynamic system I can use to animate pretty much anything I need.

    Read the article

  • Agile PLM 9.3 Service Pack 2 (SP2 or 9.3.0.2) is released along with AUT 1.6.2.0 and AutoVue 20 for Agile PLM

    - by Shane Goodwin
    Oracle released Agile PLM 9.3 SP2 on June 14 and the Agile installer for AutoVue 20 for Agile PLM on April 30. Also available are the new versions of AUT and Averify - 1.6.3 for both tools.

    9.3 SP2 is a combined English and NLS release for use on any version of 9.3.0. SP2 contains many bug fixes and rolls up several Hot Fixes - please review the Readme for all the details. In addition, this release also addresses some scalability issues when working with very large Exports and Reports. When exporting very large BOMs, the export module will now release objects more efficiently to reduce the amount of memory consumed on the Application Server. Administrators can also control the maximum row limits for Users versus system processes, like ACS. Several out of the box BOM reports have also been changed to use a new row limit option. The combination of all these changes will provide more stability on the application server for customers managing very large datasets.

    9.3 SP2 also adds support for Oracle Database 11gR2 for Windows, Oracle Internet Directory (OID) and Oracle Access Manager (OAM). Please note that currently the Variant Patch is not intended to be released for SP2. Customers running the Variant Patch should remain on 9.3.0.0 or 9.3.0.1.

    Back in April, we also released the AutoVue 20 for Agile PLM installer. AutoVue 20 has many new features which will help Agile PLM customers. Large multi-page Word documents and 2D CAD documents will open more quickly to the first page or first rendition. Memory usage is lower when working with 3D Models. There are many new formats supported for MCAD, 2D CAD, and EDA. AutoVue 20 is immediately available for Windows and Linux platforms.

    The new software can be found in Edelivery or Metalink / Oracle Support:

    - AutoVue 20 for Agile PLM is on E-Delivery with part number B58963-01
    - Oracle Agile PLM 9.3 Service Pack 2 (9.3.0.2): My Oracle Support Patch ID 9782736
    - AVERIFY 1.6.3: My Oracle Support Patch ID 9791892
    - AUT 1.6.3: My Oracle Support Patch ID 9791908
    - Agile PLM 9.3 SP2 Documentation is available on the OTN Agile Documentation Page

    Read the article

  • Drinking Our Own Champagne: Fusion Accounting Hub at Oracle

    - by Di Seghposs
    A guest post by Corey West, Senior Vice President, Oracle's Corporate Controller and Chief Accounting Officer

    There's no better story to tell than one about Oracle using its own products with blowout success. Here's how this one goes.

    As you know, Oracle has increased its share of the software market through a number of high-profile acquisitions. Legally combining companies is a very complicated process -- it can take months to complete, especially for the acquisitions with offices in several countries, each with its own unique laws and regulations. It's a mission critical and time sensitive process to roll an acquired company's legacy systems (running vital operations, such as accounts receivable and general ledger (GL)) into the existing systems at Oracle.

    To date, we've run our primary financial ledgers in E-Business Suite R12 -- and we've successfully met the requirements of the business and closed the books on time every single quarter. But there's always room for improvement and that comes in the form of Fusion Applications. We are now live on Fusion Accounting Hub (FAH), which is the first critical step in moving to a full Fusion Financials instance. We started with FAH so that we could design a global chart of accounts. Eventually, every transaction in every country will originate from this global chart of accounts -- it becomes the structure for managing our business more uniformly. In conjunction, we're using Oracle Hyperion Data Relationship Management (DRM) to centralize and automate governance of our global chart of accounts and related hierarchies, which will help us lower our costs and greatly reduce risk.

    Each month, we have to consolidate data from our primary general ledgers. We have been able to simplify this process considerably using FAH. We can now submit our primary ledgers running in E-Business Suite (EBS) R12 directly to FAH, eliminating the need for more than 90 redundant consolidation ledgers. Also we can submit incrementally, so if we need to book an adjustment in a primary ledger after close, we can do so without re-opening it and re-submitting. As a result, we have earlier visibility to period-end actuals during the close.

    A goal of this implementation, and one that we successfully achieved, is that we are able to use FAH globally with no customization. This means we have the ability to fully deploy ledger sets at the consolidation level, plus we can use standard functionality for currency translation and mass allocations. We're able to use account monitoring and drill down functionality from the consolidation level all the way through to EBS primary ledgers and sub-ledgers, which allows someone to click through a transaction appearing at the consolidation level clear through to its original source, a significant productivity enhancement when doing research. We also see a significant improvement in reporting using the Essbase cube and Hyperion Smart View. Specifically, "the addition of an Essbase cube on top of the GL gives us tremendous versatility to automate and speed our elimination process," says Claire Sebti, Senior Director of Corporate Accounting at Oracle.

    A highlight of this story is that FAH is running in a co-existence environment. Our plan is to move to Fusion Financials in steps, starting with FAH. Next, our Oracle Financial Services Software subsidiary will move to a full Fusion Financials instance. Then we'll replace our EBS instance with Fusion Financials. This approach allows us to plan in steps, learn as we go, and not overwhelm our teams. It also reduces the risk that comes with moving the entire instance at once. Maria Smith, Vice President of Global Controller Operations, is confident about how they've positioned themselves to uptake more Fusion functionality and is eager to "continue to drive additional efficiency and cost savings."

    In this story, the happy customers are Oracle controllers, financial analysts, accounting specialists, and our management team that get earlier access to more flexible reporting. "Fusion Accounting Hub simplifies our processes and gives us more transparency into account activity," raves Alex SanJuan, Senior Director, Record to Report Strategic Process Owner. Overall, the team has been very impressed with the usability and functionality of FAH and are pleased with the quantifiable improvements. Claire Sebti states, "Our WD5 close activities have been reduced by at least four hours of system processing time, just for the consolidation group."

    Fusion Accounting Hub is an inspiring beginning to our Fusion Financials implementation story. There's no doubt it's going to be an international bestseller!

    Corey West, Senior Vice President
    Oracle's Corporate Controller and Chief Accounting Officer

    Read the article

  • Brightness Crash and Fan issues in 12.04.1

    - by S.A. McIntosh
    I would just like to state beforehand that I am a total novice in using Ubuntu when it comes to the more complex issues. So I thought it would be best to finally come here and ask for help before being re-directed or closed out for a solution. I have already looked high and low on this board for one but nothing came up for my particular case, so I might as well take a shot asking for the first time here. This is what I have at the moment:

    - Dell Inspiron 1764 w/ 64-bit Intel i5 Core
    - Dual Boot: Windows 7 / Ubuntu 12.04.1 32-bit (from 12.04 install)
    - Unity shell
    - Linux kernel: 3.2.0-32 generic-pae

    ...and this is my fglrxinfo:

        OpenGL vendor string: Advanced Micro Devices, Inc.
        OpenGL renderer string: ATI Mobility Radeon HD 5000 Series
        OpenGL version string: 4.2.11627 Compatibility Profile Context

    The one issue I have with using Ubuntu is brightness. With the driver in, every time I use the slider in the Brightness and Lock settings or use the keyboard function, it freezes, goes black and comes up with a scrambled colors page like this in the video. So I have looked all over this board and the web for answers. This is what I have done so far to fix this:

    First Solution: Looking around, I found this small fix using the terminal:

        sudo gedit /etc/rc.local

    followed by adding this into "rc.local":

        echo # > /sys/class/backlight/acpi_video0/brightness

    This works rarely with the graphics driver still in, and I often get lucky, say during restart, but a reboot would only snap the brightness back to max.

    Second Solution: Simply remove the graphics driver while keeping the first solution in place. This solves the issue, but results in the monitor flickering and flashing at startup, which in itself is not a problem to me but maybe not so good for monitor health. Also it causes the fan to speed up throughout the session and renders any program that needs the driver useless.

    Third Solution: This is the most obvious. Just simply use the brightness control in the AMD Catalyst Control Center software that came with the driver, and I can say that its form of brightness control is HORRIBLE compared to the actual settings.

    Which leads up to where I am now: back on the driver to stop the fan speed-up. It seems that the only workaround for the brightness crash is to use the keyboard-controlled brightness at the login screen, NOT the desktop, if I want the desired effect, but it will just snap back to max brightness if I restart. The fan speed problem is dealt with, but now I run the risk of crashing my computer if I so much as touch the brightness settings. Speaking of which, I found this on Launchpad and it seems that the issue has been going on since June of 2012.

    Any help, redirect link or reference would be greatly appreciated. Thank you.

    Read the article

  • Attunity Oracle CDC Solution for SSIS - Beta

    We in no way work for Attunity, but we were asked to test drive a beta version of their Oracle CDC solution for SSIS. Everybody should know that moving more data than you need to takes too much time and uses resources that may better be employed doing something else. Change Data Capture is a technology that is designed to help you identify only the data that has had something done to it, so you can move only what is needed. Microsoft have implemented this exact functionality in SQL Server 2008 and I really like it there (see the sketch at the end of this post). Attunity, though, are doing it on Oracle.

    DISCLAIMER: This is a BETA release and some of the parts are a bit ugly/difficult to work with. The idea though is definitely right, and the product, once working, does exactly what it says on the tin. They have always been helpful to me when I have had a problem with the product, and if that continues then beta testing pain should be eased somewhat.

    In due course I am going to be doing some videos around me using the product. If you use Oracle and SSIS then give it a go. Here is their product description.

    Attunity is a Microsoft SQL Server technology partner and the creator of the Microsoft Connectors for Oracle and Teradata, currently available in SQL Server 2008 Enterprise Edition. Attunity released a beta version of the Attunity Oracle-CDC for SSIS, a product that integrates continually changing Oracle data into SSIS, efficiently and in real-time. Attunity designed the product and integrated it into SSIS to enable the simple creation of change data capture (CDC) solutions, accelerate implementation time, and reduce resources and costs. They also utilize log-based CDC so the solution has minimal impact on the Oracle source system. You can use the product to implement enterprise-class data replication, synchronization, and real-time business intelligence (BI) and data warehousing projects, quickly and efficiently, leveraging your existing SQL Server investments and resource skills. Attunity architected the product specifically for the Microsoft SSIS developer community and the product is available for both SQL Server 2005 and SQL Server 2008. It offers the following key capabilities:

    · Log-based, non-intrusive Oracle CDC
    · Full integration into SSIS and the Business Intelligence Developer Studio
    · Automatic generation of SSIS packages for CDC as well as full-loads of Oracle data
    · Filtering of Oracle tables and columns at the source
    · Monitoring and control of CDC processing

    Click to learn more and download the beta.
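    For comparison, this is roughly what enabling the built-in SQL Server 2008 feature looks like (a sketch against a hypothetical Sales.Orders table, nothing to do with the Attunity product itself):

        -- Enable CDC for the database, then for one table
        USE SourceDatabase;
        EXEC sys.sp_cdc_enable_db;

        EXEC sys.sp_cdc_enable_table
             @source_schema = N'Sales',
             @source_name   = N'Orders',
             @role_name     = NULL;  -- NULL = no gating role; restrict this in production

        -- Changed rows are then read from the generated table-valued function,
        -- e.g. cdc.fn_cdc_get_all_changes_Sales_Orders(@from_lsn, @to_lsn, N'all')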

    Read the article

  • Looking for tips on managing complexity with SCM repositories

    - by Philip Regan
    I am a solo developer in my department and I have a lot of individual projects, all created and managed by me. I started using SVN at ProjectLocker via Versions on the Mac a couple years ago when the variety of projects started getting unwieldy.

    Scenario 1: Now I have a process that is of reasonable complexity, so it can be broken up into multiple smaller applications, and they all share files. In one phase, there is a single shared file—a constants file—that is shared between a Cocoa app and an iPhone app framework. In the second phase, the iPhone app framework will be used to create individual apps of the same ilk—controller classes and whatnot will all be the same—but with different content in each. The problem that I am running across is that the file in the first phase is in one repository with the application that started it, and the app framework is in a second, separate repository.

    Scenario 2: I have another application framework that partially relies on code from an open source project. This is all internal, non-commercial work, but again, the application framework is going to be used to create a variety of unique products and processes. So, now I have an internally managed repository and an externally managed one out of my control. I make little changes to the open source code to meet the needs of my framework when there is an update I download, but I never commit back into the external repository (though, now that I think about it, I don't think I'm committing it to mine either. Oops).

    The Problem: I have all of this set up on my production Mac quite nicely, but duplicating and subsequently maintaining that environment on my laptop has been challenging. For Scenario 1, I've thought of merging these two projects together into the same repository because they are, for all intents and purposes, inextricably linked. But for Scenario 2, I think I'm stuck just managing files as best I can.

    The Question: I'm wondering if anyone has any tips on how to manage either of these situations, as well as other complex SCM scenarios when it comes to linking various files from various repositories together. My familiarity with SVN only comes from my work with Versions. It's been great, but I'm a little out of my depth here.

    Read the article

  • ArchBeat Link-o-Rama for 2012-04-11

    - by Bob Rhubart
    - Oracle Technology Network Developer Day: MySQL - New York (www.oracle.com)
      Wednesday, May 02, 2012, 8:00 AM – 4:30 PM. Grand Hyatt New York, 109 East 42nd Street, Grand Central Terminal, New York, NY 10017
    - OTN Architect Day - Reston, VA - May 16 (www.oracle.com)
      The live one-day event in Reston, VA brings together architects from a broad range of disciplines and domains to share insights and expertise in the use of Oracle technologies to meet the challenges today's solution architects regularly face. Registration is free, but seating is limited.
    - InfoQ: Seven Secrets Every Architect Should Know (www.infoq.com)
      Frank Buschmann's secrets: User Tasks-based Design, Be Minimalist, Ensure Visibility of Domain Concepts, Use Uncertainty as a Driver, Design Between Things, Check Assumptions, Eat Your Own Dog Food.
    - Roadmaps for the IT shop's evolution | Andy Mulholland (www.capgemini.com)
      Andy Mulholland discusses "the challenge of new technology and the disruptive change it brings, together with the needs to understand and plan, or even try to gain control of end-users implementations."
    - Drive Online Engagement with Intuitive Portals and Websites | Kellsey Ruppel (blogs.oracle.com)
      "The web presence must be able to scale to support the delivery of personalized and targeted content to thousands of site visitors without sacrificing performance," says WebCenter blogger Kellsey Ruppel. "And integration between systems becomes more important as well, as organizations strive to obtain one view of the customer culled from WCM data, CRM data and more."
    - New Exadata Customer Cases | Javier Puerta (blogs.oracle.com)
      Javier Puerta shares links to four new customer use cases featuring details on the solutions implemented at each of these sizable companies.
    - Invoicing: It's time to catch up! | Jesper Mol (www.nl.capgemini.com)
      Capgemini's Jesper Mol diagrams an e-invoicing solution that includes Oracle Service Bus.
    - Using SAP Adapter with OSB 11g (PS3) | Shub Lahiri (blogs.oracle.com)
      Shub Lahiri shares a brief overview outlining the steps required to build such a simple project with Oracle Service Bus 11g and SAP Adapter for the PS3 release.
    - Northeast Ohio Oracle Users Group 2 Day Seminar - May 14-15 - Cleveland, OH (www.neooug.org)
      More than 20 sessions over 4 tracks, featuring 18 speakers, including Oracle ACE Director Cary Millsap, Oracle ACE Director Rich Niemiec, and Oracle ACE Stewart Brand. Register before April 15 and save.

    Thought for the Day: "Today, most software exists, not to solve a problem, but to interface with other software." — I. O. Angell

    Read the article

  • ArchBeat Top 10 for November 18-24, 2012

    - by Bob Rhubart
    The Top 10 most popular items shared on the OTN ArchBeat Facebook page for the week of November 18-24, 2012.

    1. One-Stop Shop for over 200 On-Demand Oracle Webcasts
       Webcasts can be a great way to get information about Oracle products without having to go cross-eyed reading yet another document off your computer screen. Oracle's new Webcast Center offers selectable filtering to make it easy to get to the information you want. Yes, you have to register to gain access, but that process is quick, and with over 200 webcasts to choose from you know you'll find useful content.
    2. Oracle SOA Suite 11g PS 5 introduces BPEL with conditional correlation for aggregation scenarios | Lucas Jellema
       An extensive, detailed technical post from Oracle ACE Director Lucas Jellema.
    3. Oracle Utilities Application Framework V4.2.0.0.0 Released | Anthony Shorten
       Principal Product Manager Anthony Shorten shares an overview of the changes implemented in the new release.
    4. Fault Handling and Prevention - Part 1 | Guido Schmutz and Ronald van Luttikhuizen
       In this technical article, part one of a four part series, Oracle ACE Directors Guido Schmutz and Ronald van Luttikhuizen guide you through an introduction to fault handling in a service-oriented environment using Oracle SOA Suite and Oracle Service Bus.
    5. Oracle BPM Process Accelerators and process excellence | Andrew Richards
       "Process Accelerators are ready-to-deploy solutions based on best practices to simplify process management requirements," says Capgemini's Andrew Richards. "They are considered to be 'product grade,' meaning they have been designed, engineered, documented and tested by Oracle themselves to a level that they can be deployed as-is for a solution to a problem or extended as appropriate for a particular scenario."
    6. Videos: Getting Started with Java Embedded | The Java Source
       Interested in Java Embedded? You'll want to check out these videos provided by Tori Weildt, including interviews with Oracle's James Allen and Kevin Smith, recorded at ARM TechCon.
    7. JPA SQL and Fetching tuning (EclipseLink) | Edwin Biemond
       Oracle ACE Edwin Biemond's post illustrates how to "use the department and employee entity of the HR Oracle demo schema to explain the JPA options you have to control the SQL statements and the JPA relation Fetching."
    8. Devoxx 2012 Trip Report - clouds and sunshine | Markus Eisele
       Oracle ACE Director Markus Eisele shares an extensive and entertaining account of his experience at Devoxx 2012.
    9. Towards Ultra-Reusability for ADF - Adaptive Bindings | Duncan Mills
       "The task flow mechanism embodies one of the key value propositions of the ADF Framework," says Duncan Mills. "However, what if we could do more? How could we make task flows even more re-usable than they are today?" As you might expect, Duncan has answers for those questions.
    10. Java Specification Requests in Numbers | Markus Eisele
       Oracle ACE Director Markus Eisele shares some interesting data culled from the Java Community Process site.

    Thought for the Day: "You can't have great software without a great team, and most software teams behave like dysfunctional families." — Jim McCarthy
    Source: SoftwareQuotes.com

    Read the article

  • Are there deprecated practices for multithread and multiprocessor programming that I should no longer use?

    - by DeveloperDon
    In the early days of FORTRAN and BASIC, essentially all programs were written with GOTO statements. The result was spaghetti code and the solution was structured programming. Similarly, pointers can have difficult-to-control characteristics in our programs. C++ started with plenty of pointers, but the use of references is recommended. Libraries like the STL can reduce some of our dependency. There are also idioms to create smart pointers that have better characteristics, and some versions of C++ permit references and managed code. Programming practices like inheritance and polymorphism use a lot of pointers behind the scenes (just as for, while, and do structured programming generates code filled with branch instructions). Languages like Java eliminate pointers and use garbage collection to manage dynamically allocated data instead of depending on programmers to match all their new and delete statements.

    In my reading, I have seen examples of multi-process and multi-thread programming that don't seem to use semaphores. Do they use the same thing with different names, or do they have new ways of structuring protection of resources from concurrent use?

    For example, a specific example of a system for multithread programming with multicore processors is OpenMP. It represents a critical region as follows, without the use of semaphores, which seem not to be included in the environment:

        th_id = omp_get_thread_num();
        #pragma omp critical
        {
          cout << "Hello World from thread " << th_id << '\n';
        }

    This example is an excerpt from: http://en.wikipedia.org/wiki/OpenMP

    Alternatively, similar protection of threads from each other using semaphores with functions wait() and signal() might look like this:

        wait(sem);
        th_id = get_thread_num();
        cout << "Hello World from thread " << th_id << '\n';
        signal(sem);

    In this example, things are pretty simple, and just a simple review is enough to show that the wait() and signal() calls are matched and, even with a lot of concurrency, thread safety is provided. But other algorithms are more complicated and use multiple semaphores (both binary and counting) spread across multiple functions with complex conditions that can be called by many threads. The consequences of creating deadlock or failing to make things thread safe can be hard to manage.

    Do systems like OpenMP eliminate the problems with semaphores? Do they move the problem somewhere else? How do I transform my favorite semaphore-using algorithm to not use semaphores anymore?

    Read the article

  • Canonicalization issue regarding academic URL vs. blog URL

    - by user5395
    I'm sorry if what I am about to write is long-winded. I only wish to be clear.

    I am an academic in the scientific community. I maintain a web site for my research, teaching, and other professional activities. Until recently, the content for this site was hosted in a directory on my university department's own server. The address is of the typical form (universityname).edu/~(myusername).

    I decided that I wanted to use WordPress in order to host and manage my page. So I set up a WordPress.com blog and then replaced the index.html file in (universityname).edu/~(myusername) with a new one consisting of a single frame, containing the WordPress.com blog. Now when a user visits (universityname).edu/~(myusername), he or she sees the blog instead. This has been pretty nice because, even when the user clicks on links between pages or posts in the blog, the only thing showing up in the address bar of the browser is www.(universityname).edu/~(myusername), because the blog is constrained to a frame.

    However, the effect of this change on the search side of things has not been so kind to me. Before, when someone searched for my name in Google, the first result was always (universityname).edu/~(myusername). This is the most desirable outcome, for professional reasons. (Having my academic URL come up first suggests that I am an accredited professional, and not just some crank with a blog!) But now, Google seems to have canonicalized my web presence under the blog's WordPress.com address. It has completely forgotten about my academic URL and considers the WordPress.com address to be the best address representing me on the web. Unfortunately, WordPress.com doesn't support the canonical tag, so I can't tell the blog to advertise itself as my academic URL in the header. (It doesn't seem to help at all that I have used the WordPress.com dashboard to turn on no-indexing of the blog.)

    One obvious solution would be to use the departmental server to host my content again, and use a local installation of the WordPress platform. For reasons beyond my control, the platform will not be deployed on the departmental server at this time. Another solution would be to use shared hosting with WordPress.org support, because the WordPress.org platform does support the canonical tag (albeit via a plug-in). But this seems to usually require purchasing a domain name and other fees, and there is no guarantee that Google will listen to the canonical tag (it might use whatever domain name I end up with instead).

    Is there a way I can more cleverly integrate the WordPress.com blog into a page hosted on my department's server? Is there some PHP code I can write to retrieve the blog's contents in a way that Google won't treat as a link / "perceive" the blog? Please note: I am a PHP novice at best. I just feel there should be a simpler solution to all this, within the constraints of what I have described above. Thanks!

    Read the article

  • Mass Metadata Updates with Folders

    - by Kyle Hatlestad
    With the release of WebCenter Content PS5, a new folder architecture called 'Framework Folders' was introduced. It is meant to replace the 'Folders_g' architecture. While the concepts of a folder structure and access to those folders through Desktop Integration Suite remain the same, the underlying architecture of the component has been completely rewritten. One of the main goals of the new folders is to scale better at large volumes and to remove the limit of 1000 content items or sub-folders within a folder. Along with the new architecture, it has a new look, and a few additional features have been added. One of those features is Query Folders. These are folders that are populated by a query rather than by literally placing items within them. This is something the Library has provided, but it always took an administrator to define them through the Web Layout Editor. Now users can quickly define query folders anywhere within the standard folder hierarchy.

    Within Framework Folders is the very handy ability to do metadata updates. It's similar to the Propagate feature in Folders_g, but there are some key differences that make it far more flexible and powerful:

    - It works in regular folders and in Query Folders, so the content you're updating doesn't all have to be in the same folder...or in a folder at all.
    - The user decides what metadata to propagate. In Folders_g, the system administrator controls which fields will be propagated using a single administration page. In Framework Folders, the user decides at propagation time which fields to update.
    - You set the value you want on the propagation screen. In Folders_g, propagation used the metadata defined on the parent folder. With Framework Folders, you supply the new metadata value when you select the fields to update; it does not have to be defined on the parent folder.

    Because of these differences, I think the new propagate method is much more useful. Instead of always having to rely on Archiver or a custom spreadsheet, you can quickly do mass metadata updates right within folders. Here are the basic steps to perform propagation:

    1. Create a folder for the propagation. You can use a regular folder, but a Query Folder will work as well.
    2. Go into the folder to get the results.
    3. In the Edit menu, select 'Propagate'.
    4. Select the check-box next to each field to update and enter the new value.
    5. Click the Propagate button. Once complete, a dialog will appear showing it is done.

    What's also nice is that the process happens asynchronously in the background, which means you can browse to other pages and do other things while it is still working; you aren't stuck on the page waiting for it to complete. In addition, you can add a configuration flag to the server to turn on a status indicator icon: set 'FldEnableInProcessIndicator=1' and a working icon will show while the propagation runs.

    There is a caveat when using propagation on a Query Folder. While a propagation on a regular folder will update all of the items within that folder, a Query Folder propagation will only update the first 50 items. So you may need to run it multiple times depending on the size...and have the query exclude the items as they get updated.

    One extra note: Framework Folders is offered as the default folder architecture in the PS5 release of WebCenter Content. But if you are using WebCenter Content integrated with another product that makes use of folders (WebCenter Portal/Spaces, Fusion Applications, Primavera, etc.), you'll need to continue using Folders_g until those products are updated to use the new folders.
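    For reference, a sketch of where that flag would live, assuming the standard Content Server directory layout (the path below is the usual default; check your own install):

        # <IntradocDir>/config/config.cfg
        # Show a working icon while a folder propagation runs in the background.
        FldEnableInProcessIndicator=1

    As with most config.cfg entries, the Content Server typically needs a restart before the setting takes effect.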

    Read the article

  • Introducing the First Global Web Experience Management Content Management System

    - by kellsey.ruppel
    By Calvin Scharffs, VP of Marketing and Product Development, Lingotek

    Globalizing online content is more important than ever. The total spending power of online consumers around the world is nearly $50 trillion, a recent Common Sense Advisory report found. Three years ago, enterprises would have had to translate content into 37 languages to reach 98 percent of Internet users. This year, it takes 48 languages to reach the same share of users. For companies seeking to increase global market share, “translate frequently and fast” is the name of the game. Today’s content is dynamic and ever-changing, covering the gamut from social media sites to company forums to press releases. With high-quality translation and localization, enterprises can tailor content to consumers around the world.

    Speed and Efficiency in Translation

    When it comes to the “frequently and fast” part of the equation, enterprises run into problems. Professional service providers deliver translated content in files, which company workers then have to insert manually into their CMS. When companies update or edit source documents, they have to hunt down all the translated content and change each document individually. Lingotek and Oracle have solved the problem by making the Lingotek Collaborative Translation Platform fully integrated and interoperable with Oracle WebCenter Sites Web Experience Management. Lingotek combines best-in-class machine translation, real-time community/crowd translation, and professional translation to enable companies to publish globalized content in an efficient and cost-effective manner. WebCenter Sites Web Experience Management simplifies the creation and management of different types of content across multiple channels, including social media.

    Globalization Without Interrupting the Workflow

    The combination of the Lingotek platform with WebCenter Sites ensures that the process of authoring, publishing, targeting, optimizing, and personalizing global Web content is automated, saving companies the time and effort of entering content manually. Users can seamlessly integrate translation into their WebCenter Sites workflows, optimizing translation and localization across web, social, and mobile channels in multiple languages. The original structure and formatting of all translated content is maintained, saving workers the time and effort of inserting the translated text and reformatting it. In addition, Lingotek’s continuous publication model addresses the dynamic nature of content, automatically updating the status of translated documents within the WebCenter Sites workflow whenever users edit or update source documents. This enables users to sync translations in real time. The translation, localization, updating, and publishing of Web Experience Management content happens in a single, uninterrupted workflow. The net result of Lingotek Inside for Oracle WebCenter Sites Web Experience Management is a system that more than meets the need for frequent and fast global translation: workflows are accelerated, the globalization of content becomes faster and more streamlined, and enterprises save time, cost, and effort in translation project management while addressing the needs of each of their global markets in a timely and cost-effective manner.

    About Lingotek

    Lingotek is an Oracle Gold Partner and is going to be one of the first Oracle Validated Integrator (OVI) partners with WebCenter Sites. Lingotek is also an OVI partner with Oracle WebCenter Content. Watch a video about how Lingotek Inside for Oracle WebCenter Sites works! Oracle WebCenter will be hosting a webinar, “Hitachi Data Systems Improves Global Web Experiences with Oracle WebCenter,” tomorrow, September 13th. To attend the webinar, please register now! For more information about Lingotek for Oracle WebCenter, please visit http://www.lingotek.com/oracle.

    Read the article

  • Apache2 syntax error: can't access 000-default

    - by enrique2334
    I have been using Apache2 and Webmin on my Raspberry Pi. After a restart and some reinstallations, Apache won't start:

        > sudo /etc/init.d/apache2 restart
        apache2: Syntax error on line 268 of /etc/apache2/apache2.conf: Could not open configuration file /etc/apache2/sites-enabled/000-default: No such file or directory
        Action 'configtest' failed.
        The Apache error log may have more information.
        failed!

    The file 000-default is there, its permissions are root:root, and it cannot be opened. My apache2.conf file looks like this (bottom half):

        # ErrorLog: The location of the error log file.
        # If you do not specify an ErrorLog directive within a <VirtualHost>
        # container, error messages relating to that virtual host will be
        # logged here. If you *do* define an error logfile for a <VirtualHost>
        # container, that host's errors will be logged there and not here.
        #
        ErrorLog ${APACHE_LOG_DIR}/error.log

        #
        # LogLevel: Control the number of messages logged to the error_log.
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        #
        LogLevel debug

        # Include module configuration:
        Include mods-enabled/*.load
        Include mods-enabled/*.conf

        # Include list of ports to listen on and which to use for name based vhosts
        Include ports.conf

        #
        # The following directives define some format nicknames for use with
        # a CustomLog directive (see below).
        # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i
        #
        LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
        LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
        LogFormat "%h %l %u %t \"%r\" %>s %O" common
        LogFormat "%{Referer}i -> %U" referer
        LogFormat "%{User-agent}i" agent

        # Include of directories ignores editors' and dpkg's backup files,
        # see the comments above for details.

        # Include generic snippets of statements
        Include conf.d/

        # Include the virtual host configurations:
        Include sites-enabled/

        <VirtualHost *:80>
            DocumentRoot /var/www
            <Directory /var/www>
                allow from all
                Options +Indexes
            </Directory>
            ServerName IMASERVER
        </VirtualHost>

    Does anyone know the cause of this?
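    One check worth making, assuming the usual Debian/Raspbian layout where entries in sites-enabled are symlinks into sites-available: if 000-default is a symlink whose target was removed during the reinstall, Apache would report exactly this "No such file or directory" error even though the link itself is still listed.

        # Does 000-default point at a file that still exists?
        ls -l /etc/apache2/sites-enabled/
        # If the link is dangling, recreating it may help:
        # sudo a2ensite 000-default && sudo service apache2 restart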

    Read the article

  • LightSwitch Tutorial - Adding an Image to a LightSwitch Screen

    - by ChrisD
    Last week, I discussed how to control screen layouts in LightSwitch. Now I will talk about how to add an image to a LightSwitch screen. In this demo, I will upload an image to the screen and save it into the database.

    The first step is to start VS 2010 and create a LightSwitch Desktop application with the name “AddingImageIntoScreenInLSBeta2”, as shown in the following figure. The second step is to create a table, as shown on the screen, by selecting the “create a table” option in the start-up screen. Then we need to add a New Data Screen to our demo application. The following figure shows the default screen layout for the screen we have created. We have to change this layout so that uploading and using the image on the screen can be explained easily. Before adding the Modal Window, we have to prepare the layout, so delete the highlighted fields as shown in the figure above.

    After preparing the layout for the image, add a new group to the Person Property Rows Layout. To add a new group:

    [No: 1] – Select the Rows Layout; it will show you the Add button.
    [No: 2] – Click the Add button to see the options.
    [No: 3] – Select New Group.

    After adding the new group, change its layout type to Columns Layout. Here:

    - Change the Rows Layout to a Columns Layout and set the display name to “Uploading Image Example”.
    - Click on the Add button to add the Photo field under the Columns Layout.

    Next, add a new group under the Columns Layout group; follow the [No: #] steps above to create it. After adding the new Rows Layout group, add the fields to it:

    [No: 1] – Select the Rows Layout group and change the display name to “Details”.
    [No: 2] – Click on the Add button to select the appropriate fields to add to the group.
    [No: 3] – Add the fields to the group.

    The above shows the complete layout tree for our screen. Now the screen for uploading the image is ready. Just press the Play button and see the result.

    Read the article

  • Java Spotlight Episode 104: Devoxx 4 Kids

    - by Roger Brinkley
    Stephan Janssen talks about the new Devoxx 4 Kids event that he launched this past weekend in Belgium. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes, you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.

    News

    WebSocket JSR Early Draft (JSR 356)
    JAX-RS 2 Public Draft (JSR 339)
    JMS 2, JAX-RS 2, WebSocket, JSON integrated in GlassFish 4 Promoted Builds
    Java EE 7 Revised Scope - Q2 2013
    JavaOne Content Available for Free
    Please try Oracle's Java Uninstall Applet
    OpenJDK Community and Project Scorecard
    Experimental new utility to detect issues in javadoc comments
    PermGen Elimination project is promoting
    JDK bug migration milestone: JIRA now the system of record
    Project Jigsaw: On the next train
    New OpenJDK Projects: ThreeTen & Project Sumatra

    Events

    Oct 15-17, JAX London, London, United Kingdom
    Oct 20, Devoxx 4 Kids Français, Brussels, Belgium
    Oct 22-23, Freescale Technology Forum - Japan, Tokyo, Japan
    Oct 23-25, EclipseCon Europe, Ludwigsburg, Germany
    Oct 30-Nov 1, ARM TechCon, Santa Clara, United States of America
    Oct 31, JFall, Hart van Holland, Netherlands
    Nov 2-3, JMaghreb, Rabat, Morocco
    Nov 5-9, Øredev Developer Conference, Malmö, Sweden
    Nov 13-17, Devoxx, Antwerp, Belgium
    Nov 20-22, DOAG 2012, Nuremberg, Germany
    Dec 3-5, jDays, Göteborg, Sweden
    Dec 4-6, JavaOne Latin America, Sao Paolo, Brazil

    Feature Interview

    Stephan Janssen is a serial entrepreneur who has founded several successful organizations: the Belgian Java User Group (BeJUG) in 1996, JCS Int. in 1998, JavaPolis in 2002, and now Parleys.com in 2006. He has been using Java since its early releases in 1995 and has experience developing and implementing real-world Java solutions in the finance and manufacturing industries. Today Stephan is the CTO of the Java Competence Center at RealDolmen. He was selected by BEA Systems as the first European (independent) BEA Technical Director. He has also been recognized by TheServerSide as one of the 54 “Who's Who in Enterprise Java” in 2004. In 2005, Sun recognized his efforts for the Java community and engaged him in the Java Champion project. He has spoken at numerous Java and JUG conferences around the world.

    Devoxx 4 Kids
    New to Java Programming Center -- Young Developers

    What's Cool

    "Here is the draft proposal to add a public Base64 utility class for JDK8."
    Default methods for JDK8: request for code review
    Raspberry Pi Model B now ships with 512MB of RAM
    JDuchess roadshow on the Island of Java, with Nety and Mila from Meruvian: First week roadshow, Second week roadshow, Third week part 1

    Read the article

  • Diagram to show code responsibility

    - by Mike Samuel
    Does anyone know how to visually diagram the ways in which the flow of control in code passes between code produced by different groups, and how that affects the amount of code that needs to be carefully written, reviewed, and tested for system properties to hold? What I am trying to help people visualize are arguments of the form:

    For property P to hold, nd developers have to write application code, Ca, without certain kinds of errors, and nm maintainers have to make sure that the code continues to be free of these kinds of errors over the project lifetime. We could reduce the error rate by educating the nd developers and nm maintainers. For us to be confident that the property holds, ns specialists still need to test or check |Ca| lines of code and continue to test/check the changes made by the nm maintainers.

    Alternatively, we could be confident that P holds if all code paths that could violate P went through tool code, Ct, written by our specialists. In our case:

    - test suites alone cannot give confidence that P holds,
    - nd » ns,
    - nm » ns,
    - |Ca| » |Ct|,

    so writing and maintaining Ct is economical, frees up our developers to worry about other things, and reduces the ongoing education commitment by our specialists. Or those conditions do not hold, and focusing on education and testing is preferable.

    Example 1

    As a concrete example, suppose we want to ensure that our web-service only produces valid JSON output. Our web-service provides several query and mutation operators that can be composed in interesting ways. We could try to educate everyone who maintains those operations about JSON syntax, the importance of conformance, and the libraries available, so that when they write to an output buffer, every possible sequence of appends results in syntactically valid JSON. Alternatively, we don't expose an output stream handle to application code, and instead expose a JSON sink, so that every code path that writes a response is channeled through a sink that is written and maintained by a specialist who knows JSON syntax and can use well-written libraries to produce only valid output.

    Example 2

    We need to make sure that a service that receives a URL from an untrusted source and tries to fetch its content does not end up revealing sensitive files from the file-system, like file:///etc/passwd. If there is a single standard way that any developer familiar with the application language's libraries would use to fetch URLs, which has file-system access turned off by default, then simply educating developers about the standard mechanism, and testing that file probing fails for some inputs, will probably be sufficient.
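    To make Example 1 concrete, here is a minimal sketch of the JSON-sink idea (PHP chosen arbitrarily; the class name and API are invented for illustration). Application code never receives the raw output stream, only a sink whose methods cannot emit anything but well-formed JSON:

        <?php
        // Specialist-maintained tool code (Ct): the only path to the response body.
        final class JsonSink
        {
            private $fields = array();

            // Application code may only add key/value pairs...
            public function put($key, $value)
            {
                $this->fields[(string) $key] = $value;
                return $this;
            }

            // ...and json_encode guarantees the final output is valid JSON.
            public function send()
            {
                header('Content-Type: application/json');
                echo json_encode($this->fields);
            }
        }

        // Application code (Ca) composes a response without any raw writes:
        $sink = new JsonSink();
        $sink->put('status', 'ok')
             ->put('results', array(1, 2, 3))
             ->send();

    The specialists then only have to review JsonSink (|Ct|), not every operator that builds a response (|Ca|), which is exactly the trade-off the argument above describes.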

    Read the article

  • How to build a 4x game?

    - by Marco
    I'm trying to study how to successfully implement a 4X game. Areas of interest:

    1) Map data: how to store stellar systems (graphs?), how to generate them, and so on.
    2) Multiplayer: how to organize the code into a non-graphical server and a client to display it.
    3) Command system: what patterns can capture user and AI decisions and handle them, adding at first "explore" and "colonize", then "combat", "research", "spy", and so on (commands can affect ships, planets, research, etc.).
    4) AI system: the AI can use commands to expand and to upgrade planets and ships.

    I know it is a big question, so help is appreciated :D

    1) Map data

    The best choice is a graph to model a galaxy. A node is a stellar system, and every system has a list of planets. Ships cannot travel outside predefined paths, like in Ascendancy: http://www.abandonia.com/files/games/221/Ascendancy_2.png Every connection between two stellar systems has a cost, in turns. Generating a galaxy is only a matter of:

    - dimension: the number of stellar systems,
    - variety: randomizing the number of planets and their types (desert, earth-like, etc.),
    - position of each stellar system in the game space,
    - connections: ensuring a path exists between every pair of nodes, so the graph is "connected" (not sure if this is the mathematically correct term).

    2) Multiplayer

    The game is organized in turns: player 1, player 2, AI 1, AI 2. The server takes care of all data; clients just display it and collect data changes. Because it is a turn-based game, latency is not a problem :D

    3) Command system

    I would like to design a hierarchy of commands to take care of this aspect (see the sketch at the end of this post):

    abstract GenericCommand(target)
    ExploreCommand(ship) extends GenericCommand
    ColonizeCommand(ship) extends GenericCommand
    BuildCommand(planet, object) extends GenericCommand

    and so on. In my head, all these commands are stored in a queue for every planet, ship, research center, or spy, and each turn the commands are sent to the server, which applies them and changes the data state.

    4) AI system

    I don't have any ideas about this. It is a big topic, and what I want is a simple AI, something like "expand and fight against everyone". I'm thinking about a behaviour tree to control the AI's moves, so I can develop an AI that tries to build ships to expand, then colonizes planets, upgrades them through science, and fights enemies. Could it be done with a finite state machine too?

    Any ideas, resources, or articles are welcome!
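    For the command system, a minimal runnable sketch of that hierarchy (PHP used only for illustration; everything beyond the class names in the question, such as GameState and execute, is invented for this example):

        <?php
        // Stand-in for the real game data held by the server.
        class GameState
        {
            public $log = array();
        }

        // Every user or AI decision becomes a command object.
        abstract class GenericCommand
        {
            protected $target; // a ship, planet, research center, spy, ...

            public function __construct($target)
            {
                $this->target = $target;
            }

            // Each concrete command applies itself to the game state.
            abstract public function execute(GameState $state);
        }

        class ExploreCommand extends GenericCommand
        {
            public function execute(GameState $state)
            {
                $state->log[] = "Ship {$this->target} explores a neighbouring system";
            }
        }

        class ColonizeCommand extends GenericCommand
        {
            public function execute(GameState $state)
            {
                $state->log[] = "Ship {$this->target} colonizes a planet";
            }
        }

        // Each turn, the server drains the queued commands in order:
        $state = new GameState();
        $queue = array(new ExploreCommand('Scout-1'), new ColonizeCommand('Colony-1'));
        foreach ($queue as $command) {
            $command->execute($state);
        }
        print_r($state->log);

    New command types ("combat", "research", "spy") then become new subclasses, and the same queue mechanism serves both human players and the AI.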

    Read the article
