Search Results


  • Common SOA Problems by C2B2

    - by JuergenKress
    SOA stands for Service Oriented Architecture and has only really come together as a concrete approach in the last 15 years or so, although the concepts involved have been around for longer. Oracle SOA Suite is based around the Service Component Architecture (SCA) devised by the Open SOA collaboration of companies including Oracle and IBM. SCA, as used in SOA suite, is designed as a way to crystallise the concepts of SOA into a standard which ensures that SOA principles like the separation of application and business logic are maintained. Orchestration or Integration? A common thing to see with many people who are beginning to either build a new SOA based infrastructure, or move an old system to be service oriented, is confusion in the purpose of SOA technologies like BPEL and enterprise service buses. For a lot of problems, orchestration tools like BPEL or integration tools like an ESB will both do the job and achieve the right objectives; however it’s important to remember that, although a hammer can be used to drive a screw into wood, that doesn’t mean it’s the best way to do it. Service Integration is the act of connecting components together at a low level, which usually results in a single external endpoint for you to expose to your customers or other teams within your organisation – a simple product ordering system, for example, might integrate a stock checking service and a payment processing service. Process Orchestration, however, is generally a higher level approach whereby the (often externally exposed) service endpoints are brought together to track an end-to-end business process. This might include the earlier example of a product ordering service and couple it with a business rules service and human task to handle edge-cases. A good (but not exhaustive) rule-of-thumb is that integrations performed by an ESB will usually be real-time, whereas process orchestration in a SOA composite might comprise processes which take a certain amount of time to complete, or have to wait pending manual intervention. BPEL vs BPMN For some, with pre-existing SOA or business process projects, this decision is effectively already made. For those embarking on new projects it’s certainly an important consideration for those using Oracle SOA software since, due to the components included in SOA Suite and BPM Suite, the choice of which to buy is determined by what they offer. Oracle SOA suite has no BPMN engine, whereas BPM suite has both a BPMN and a BPEL engine. SOA suite has the ESB component “Mediator”, whereas BPM suite has none. Decisions must be made, therefore, on whether just one or both process modelling languages are to be used. The wrong decision could be costly further down the line. Design for performance: Read the complete article here. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: C2B2,SOA best practice,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • WebLogic Partner Community Newsletter May 2014

    - by JuergenKress
    Dear WebLogic Partner Community member, Registration for the Fusion Middleware Summer Camps 2014 is open – Register asap for one of our bootcamps August 4th – 8th 2014 in Lisbon. Please read the details and prerequisites carefully before you register. We expect that, as in the past, the conference will be booked out soon! Thanks to you, our WebLogic Specialized Partners, Oracle is #1 for Worldwide Market-Share Total Software Revenue in the Application Platform Market Segment for 2013. Want to know why? Get the new recipes for Oracle WebLogic 12.1.2. Looking for the right server to run WebLogic? Try WebLogic on Oracle Database Appliance 2.9. Want to install WebLogic? Play around with the WebLogic Maven Plug-In. Thanks for sharing all the additional WebLogic articles within the community: How to use NodeManager to control WebLogic Servers & Retrieving WebLogic Server Name and Port in ADF Application & Glassfish to WebLogic Migration & Advanced GPIO & Building Robots with Java Embedded & Quick & Dirty How-to Guide: Install GlassFish 4 on Raspberry Pi & New Release: Java Micro Edition (ME) 8. In our Development tool section Frank published Development - Performance and Tuning - Overview in the latest ADF Architecture TV channel. Many of our clients run Forms applications – make sure you run them on WebLogic. Thanks for sharing all the additional development tool articles within the community: Using Oracle WebLogic 12c with NetBeans IDE & Consuming SOAP Service & Check Box Support in ADF Query & New release of the ADF EMG Audit Rules & Working with the Array Data Type in a Table & ADF client-side architecture - Select All & Book Review: NetBeans Platform for Beginners. See you in Lisbon! To read the complete newsletter please visit http://tinyurl.com/WebLogicNewsMay2014 (OPN Account required) To become a member of the WebLogic Partner Community please register at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: WebLogic Community newsletter,newsletter,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Adjusting the Score on Oracle Text search results

    - by Kyle Hatlestad
    When you sort the results of a search by Score using OracleTextSearch as the search engine in WebCenter Content, the results coming back are based on the relevancy of the search term to the document.  In theory, the more relevant the search term is to the document, the higher ranked Score it should receive.  But in practice, the relevancy score can seem somewhat of a mystery.  It's not entirely clear how it ranks the importance of some documents over others based on the search term.  And oftentimes, once a word appears a certain number of times within a document, the Score simply maxes out at 100 and the top results can be difficult to discern from one another.  Take for example the search for 'vacation' on this set of documents: Out of 7 results, 6 of them have a Score of '100', which means they are basically ranked the same.  This doesn't make the sort by Score very meaningful.   Besides sorting by relevance, you can also tell Oracle Text to sort by occurrence.  In that case, it is much more predictable how the results will be ranked, and in many cases it provides a more meaningful sorting of results than relevance does. To change this takes a small component change to the SearchOperatorMap resource.  By default, the query used for full-text searching looks like: <td>(ORACLETEXTSEARCH)fullText</td> <td>DEFINESCORE((%V), RELEVANCE * .1)</td> <td>text</td> Overriding this resource and changing it to: <td>(ORACLETEXTSEARCH)fullText</td> <td>DEFINESCORE((%V), OCCURRENCE * .01)</td>  <td>text</td> will force it to now use occurrence (note the change in scale to .01 as well).  So running the same search and sort options as the example above, the results come out quite a bit differently: In this case, there is a clear understanding of how the items rank.   And generally, if the search term appears 3 times more often in one document than in another, that document has a better chance of being one I'm interested in.  You may or may not feel the relevance ranking is better than the search term occurrence, but this provides the opportunity to try an alternate method that might work better for your results.  A pre-built component is available for download here. There is one caveat in using this method.  The occurrence ranking also maxes out at 100, so if a search term is in the document more than that, the Score result will stay at 100.

    Read the article

  • Deferred Shading - Toolkit

    - by AliveDevil
    I recently managed to get some lights rendered in a scene by using a buffer and a for-loop. The problem with this method is the performance drop if more lights are used. I tried to convert Deferred Rendering in XNA4.0 | ROY-T.NL but it is not working, because I am not using any models. I know I have to render color, normals and lights separately, but I don't know how I could get it working. To help you understand my structure better: I'm using a world class which holds some chunks. These chunks load all vertices from their items. These items have a property which returns the vertices. The item returns VertexPositionNormalTexture[]. The chunk loads these vertices and combines them into one large array of VertexPositionNormalTexture via someList.AsParallel().SelectMany(m => m).ToArray()). m is a VertexPositionNormalTexture. someList is List<VertexPositionNormalTexture>. I have my own shader to draw these vertices how I want them to be drawn. The first thing I would try is setting up two RenderTarget2D for rendering the color and normal part, with two different shaders. Then I would have to render the lights, and there's the problem: I don't know how. I set up a structure to simplify working with lights but it didn't really help. public struct Light { public Vector3 Position; public Color4 Color; public float Range; public float Intensity; public Light( Vector3 position, Color color, float range, float intensity ) : this() { this.Position = position; this.Color = color; this.Range = range; this.Intensity = intensity; } public float[] Definition { get { return new[] { Position.X, Position.Y, Position.Z, Color.Red, Color.Green, Color.Blue, Intensity, Range }; } } } The next part is equally difficult, because I don't know how to combine the colorMap, normalMap and textureMap into one finalMap. Some information about the system: I'm using SharpDX (a nightly build from some months ago) and the SharpDX.Toolkit (I don't want to mess around with Direct3DDevice and similar things). Can someone help me with this problem? If things are missing or I provided insufficient information, tell me; I need to get deferred shading working. Things I'm not able to do: create a rendertarget which holds all lights, merge colorMap, normalMap and lightMap into one finalMap, and present this to the user.

    Read the article

  • Storing game objects with generic object information

    - by Mick
    In a simple game object class, you might have something like this: public abstract class GameObject { protected String name; // other properties protected double x, y; public GameObject(String name, double x, double y) { // etc } // setters, getters } I was thinking, since a lot of game objects (e.g. generic monsters) will share the same name, movement speed, attack power, etc., it would be better to have all that information shared between all monsters of the same type. So I decided to have an abstract class "ObjectData" to hold all this shared information. So whenever I create a generic monster, I would use the same pre-created "ObjectData" for it. Now the above class becomes more like this: public abstract class GameObject { protected ObjectData data; protected double x, y; public GameObject(ObjectData data, double x, double y) { // etc } // setters, getters public String getName() { return data.getName(); } } So to tailor this specifically for a Monster (it could be done in a very similar way for Npcs, etc.), I would add 2 classes: Monster, which extends GameObject, and MonsterData, which extends ObjectData. Now I'll have something like this: public class Monster extends GameObject { public Monster(MonsterData data, double x, double y) { super(data, x, y); } } This is where my design question comes in. Since MonsterData would hold data specific to a generic monster (and would vary from what, say, NpcData holds), what would be the best way to access this extra information in a system like this? At the moment, since the data variable is of type ObjectData, I'll have to cast data to MonsterData whenever I use it inside the Monster class. One solution I thought of is this, but it might be bad practice: public class Monster extends GameObject { private MonsterData data; // <- this part here public Monster(MonsterData data, double x, double y) { super(data, x, y); this.data = data; // <- this part here } } I've read that, for one, I should generally avoid shadowing the superclass's fields. What do you guys think of this solution? Is it bad practice? Do you have any better solutions? Is the design in general bad? How should I redesign this if it is? Thanks in advance for any replies, and sorry about the long question. Hopefully it all makes sense!

    Read the article

  • Booting Ubuntu on HP Pavilion g7 - 13.04 [duplicate]

    - by death2040
    This question already has an answer here: My computer boots to a black screen, what options do I have to fix it? 24 answers I have an HP Pavilion G7 with an AMD A4 processor and Radeon graphics. I want to install Ubuntu on my laptop, but whenever I put the Ubuntu live CD in it and boot to it, the screen shows the Ubuntu logo and the four little dots, then after about a minute or two the screen goes black. I can tell the screen is still on but it doesn't have anything on it. I'm beginning to wonder if it's a driver problem, but I can't really install the drivers when I can't even get Ubuntu to show anything except a loading screen. I've already tried using 12.04 and 12.10 and all the others down to Ubuntu 10; none of them worked. All the other versions don't even show the Ubuntu logo. I'd prefer to have Ubuntu 13.04 on it if it's possible, but I haven't had any luck finding a solution. I've also tried using the WUBI installer in Windows 7, but all that did was make my computer slower under Windows, and it does the same with the screen when I boot it into Ubuntu. I'm trying to use Ubuntu alongside Windows 7. I can't find any solution on Google. It won't load anything, and I know that there is a program called GRUB on Ubuntu that I used on my desktop computer when it had graphics trouble, but the trouble with my desktop was minor things, like the screen would flash and then show weird patterns. But I can't find anything on what to do with the HP laptop. Please help. I use this laptop a lot for games on Windows 7 and I just want to use Ubuntu for when I take my laptop to school and for school stuff. Edit: I just tried booting it with nomodeset and some other things and it still didn't work. It did boot up, but now when it goes to install alongside Windows it crashes and says Ubuntu is forcing a reboot or something like that. Also, this question is different from the black screen at boot issue because when I do use nomodeset on my computer and select install Ubuntu, it will go as far as the screen where you can choose to replace Windows or run alongside Windows. Then after I click continue it ejects the live CD and turns off my computer without installing anything. The error message it shows when it ejects the disk says signal 15, shutting down - modem manager [1675]: <info> Caught nm-dispatcher.action: Caught signal 15, shutting down... *Deconfiguring network interfaces... Please remove installation media and close the tray (if any) then press ENTER *Deactivating swap... *Stopping remaining crypto disks... *stopping early crypto disks... unmount: /run/lock: not mounted unmount: /run/shm: not mounted

    Read the article

  • Samba fails to install

    - by jschoen
    I am running XBMC, which is built around Ubuntu 10.04. It does not come with Samba pre-installed, and I need to share some media with a couple of other boxes. I followed the Think Geek directions found here. I had it all set up a couple of days ago, and thought I was in the clear. I rebooted this evening and when it came back up Samba was not started. I determined this by trying to access the Samba shares, which would return an error connecting to the server. I can ssh into it, so I know it is connected. In my infinite wisdom, I figured I had just messed something up and would just uninstall and reinstall. So I did: sudo apt-get purge samba and sudo apt-get purge smbfs. Then I tried to follow the tutorial above again. What I get after running sudo apt-get install samba smbfs is: Reading package lists... Done Building dependency tree Reading state information... Done Suggested packages: openbsd-inetd inet-superserver smbldap-tools ldb-tools ufw smbclient The following NEW packages will be installed: samba smbfs 0 upgraded, 2 newly installed, 0 to remove and 5 not upgraded. Need to get 0B/8,131kB of archives. After this operation, 22.6MB of additional disk space will be used. Preconfiguring packages ... Selecting previously deselected package samba. (Reading database ... 57098 files and directories currently installed.) Unpacking samba (from .../samba_2%3a3.4.7~dfsg-1ubuntu3.2_i386.deb)... Selecting previously deselected package smbfs. Unpacking smbfs (from .../smbfs_2%3a3.4.7~dfsg-1ubuntu3.2_i386.deb) ... Processing triggers for ureadahead ... Setting up samba (2:3.4.7~dfsg-1ubuntu3.2) ... Generating /etc/default/samba... update-alternatives: using /usr/bin/smbstatus.samba3 to provide /usr/bin/smbstatus (smbstatus) in auto mode. smbd start/running, process 2963 **start: Job failed to start** Setting up smbfs (2:3.4.7~dfsg-1ubuntu3.2) ... The bold is my own emphasis. So I am not sure what I messed up here, or how to get back to where it was. Though I am pretty sure I made it worse than it was. I found where the logs are located, /var/log, and found this line that seems to be the culprit. Jan 29 11:59:34 XBMCLive smbd[2806]: error opening config file So it seems it did not create the configuration files. Is there a way to get Samba to try to recreate them?

    Read the article

  • Own a KINECT for MS-XBOX before anyone does

    The following is an announcement by Richik Nandi from the Microsoft team. Dear Customer, We believe that our privileged customers shouldn't have to wait for good things. So, here's a special offer exclusively for you. Be one of the first in India to own and experience Kinect for XBOX 360, a few days before it is even launched in stores. Introducing the new Kinect for XBOX 360®. Kinect needs no controllers. You are the controller. Kinect brings games and entertainment to life in extraordinary new ways without using a controller. The sensor recognizes your face, eyes and body movements to deliver a superb gaming experience. Easy to use and great fun, Kinect gets everyone off the couch. See a ball? Kick it. Want to join a friend in the fun? Simply jump in. Imagine controlling movies and music with the wave of a hand or the sound of your voice! Kinect is all about fun for you and your family. And the best part is Kinect works with every Xbox 360®. There are two options you can choose from: •  Kinect sensor + 4GB Xbox 360 bundle + Kinect Adventures game at Rs 22,990/- and get the Dance Central game worth Rs 1999 from Redington, a 20% discount voucher from Starwood on food and beverages, a T-shirt from PUMA and a Kinect Adventures live card absolutely free using your unique promo code: XbTXXZl2Sb •  Kinect Sensor at Rs 9,500/- and get a 20% discount voucher from Starwood on food and beverages, a T-shirt from PUMA and a Kinect Adventures live card absolutely free using your unique promo code: lDg6o8SuYh We want you to own your Kinect before the official launch. The promotion closes by 10th November. To know more about Kinect click here. To book your Kinect, PRE-ORDER now! Enter your details along with the above mentioned promo code to avail of the free gifts offer. We will have your Kinect delivered by 19th November 2010. Enjoy being the controller. Enjoy the Kinect.

    Read the article

  • Data structures for a 2D multi-layered and multi-region map?

    - by DevilWithin
    I am working on a 2D world editor, and subsequently a world format. If I were to handle the game "world" being created just as a layered set of structures, in either top or side views, it would be fairly simple to do most things. But, since this editor is meant for 3rd parties, I have no clue how big a world someone will want to make, and I need to keep in mind that eventually it will become simply too much to handle, checking and comparing stuff that is happening completely away from the player's position. I know the solution for this is to subdivide my world into sub-regions and stream them on the fly, loading and unloading resources and other data. This way I know a virtually infinite game area is achievable. But, while I know theoretically what to do, I really have a few questions I'd hoped to get answered, for some hints about the topic. The logical way to handle the regions is some kind of grid: would you pick evenly distributed blocks with equal sizes, or would you let the user subdivide areas by taste with irregularly sized rectangles? In the case of even grids, would you use some kind of block/chunk neighbouring system to check when the player crosses the limit, or just put all those in a simple array? Since a region is a different data structure than its owner "game world", when streaming a region would you deliver the objects to the parent structures and track them for unloading later, or retain the objects in each region for a more "hard-limit" approach? Introducing the subdivision approach to the project, and already having a multi-layered scene graph structure in place, how would I make it support the new concept? Would you have the parent node have the layers as children, and replicate in each layer node a node per region? Or the opposite: the parent node owns all the regions possible, and each region has multiple layers as children? Or would you just put the region logic outside the graph completely (compatible with the first suggestion in Q3)? When I say virtually infinite worlds, I mean it of course under the constraints of the variable sizes and so on. Using float positions, a HUGE world can already be made. Do you think it's sane to think beyond that? Because I think it's OK to stick to this limit, since it will never be reached so easily. As for when to stream a region, I'm implementing it as a collection of watcher cameras, which the streaming system works with to know what to load/unload (a rough sketch of the even-grid version of this follows below). The problem here is, I will need some kind of warps/teleports built in for my game, and there is a chance I will be teleporting a player to an unloaded region far away. How would you approach something like this? Is it sane to load any region into memory which can be teleported to by a warp within a radius from the player? Sorry for the huge question, any answers are helpful!
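    A minimal sketch of the even-grid idea described above (in C#, with the chunk size and watcher radius as assumed parameters, not anything prescribed by the question): chunks are keyed by integer grid coordinates, every watcher camera (or warp target) keeps a square of chunks loaded, and anything outside every watcher's radius is unloaded.

        using System.Collections.Generic;

        // One streamable sub-region of the world.
        class Chunk
        {
            public bool Loaded;
            public void Load()   { Loaded = true;  /* stream vertices, objects, layers in */ }
            public void Unload() { Loaded = false; /* release resources, persist changes  */ }
        }

        class ChunkGrid
        {
            const float ChunkSize = 256f; // world units per chunk (assumed value)
            readonly Dictionary<(int, int), Chunk> chunks = new Dictionary<(int, int), Chunk>();

            // Keep chunks around every watcher loaded; unload the rest.
            public void Update(IEnumerable<(float X, float Y)> watchers, int radius)
            {
                var wanted = new HashSet<(int, int)>();
                foreach (var w in watchers)
                {
                    int cx = (int)(w.X / ChunkSize), cy = (int)(w.Y / ChunkSize);
                    for (int x = cx - radius; x <= cx + radius; x++)
                        for (int y = cy - radius; y <= cy + radius; y++)
                            wanted.Add((x, y));
                }

                foreach (var key in wanted)
                {
                    if (!chunks.TryGetValue(key, out var chunk))
                        chunks[key] = chunk = new Chunk();
                    if (!chunk.Loaded) chunk.Load();
                }

                foreach (var pair in chunks)
                    if (pair.Value.Loaded && !wanted.Contains(pair.Key))
                        pair.Value.Unload();
            }
        }

    Treating a warp destination as just another watcher fed into Update() would pre-load the target region before the teleport completes, which is one way to handle the last question.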

    Read the article

  • How to suggest using an ORM instead of stored procedures?

    - by Wayne M
    I work at a company that only uses stored procedures for all data access, which makes it very annoying to keep our local databases in sync, as on every commit we have to run new procs. I have used some basic ORMs in the past and I find the experience much better and cleaner. I'd like to suggest to the development manager and the rest of the team that we look into using an ORM of some kind for future development (the rest of the team are only familiar with stored procedures and have never used anything else). The current architecture is .NET 3.5 written like .NET 1.1, with "god classes" that use a strange implementation of ActiveRecord and return untyped DataSets which are looped over in code-behind files - the classes work something like this: class Foo { public bool LoadFoo() { bool blnResult = false; if (this.FooID == 0) { throw new Exception("FooID must be set before calling this method."); } DataSet ds = // ... call to Sproc if (ds.Tables[0].Rows.Count > 0) { this.FooName = ds.Tables[0].Rows[0]["FooName"].ToString(); // other properties set blnResult = true; } return blnResult; } } // Consumer Foo foo = new Foo(); foo.FooID = 1234; foo.LoadFoo(); // do stuff with foo... There is pretty much no application of any design patterns. There are no tests whatsoever (nobody else knows how to write unit tests, and testing is done by manually loading up the website and poking around). Looking through our database we have: 199 tables, 13 views, a whopping 926 stored procedures and 93 functions. About 30 or so tables are used for batch jobs or external things; the remainder are used in our core application. Is it even worth pursuing a different approach in this scenario? I'm talking about moving forward only, since we aren't allowed to refactor the existing code because "it works", so we cannot change the existing classes to use an ORM; but I don't know how often we add brand-new modules instead of adding to/fixing current modules, so I'm not sure if an ORM is the right approach (too much invested in stored procedures and DataSets). If it is the right choice, how should I present the case for using one? Off the top of my head the only benefits I can think of are having cleaner code (although it might not be, since the current architecture isn't built with ORMs in mind, so we would basically be jury-rigging ORMs onto future modules while the old ones would still be using the DataSets) and less hassle remembering which procedure scripts have been run and which still need to be run, etc., but that's it, and I don't know how compelling an argument that would be. Maintainability is another concern, but one that nobody except me seems to be concerned about.
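    For comparison, here is roughly what the same load would look like through an ORM; this is a hedged sketch using Entity Framework purely as an example, and the class, property and context names are assumed for illustration rather than taken from the real codebase:

        using System.Data.Entity;
        using System.Linq;

        // The entity: plain properties instead of manual DataSet column mapping.
        public class Foo
        {
            public int FooID { get; set; }
            public string FooName { get; set; }
            // other properties
        }

        // The ORM's unit of work; by convention it maps Foo to a Foos table.
        public class FooContext : DbContext
        {
            public DbSet<Foo> Foos { get; set; }
        }

        // Consumer: no stored procedure to keep in sync, no untyped DataSet.
        class Program
        {
            static void Main()
            {
                using (var db = new FooContext())
                {
                    Foo foo = db.Foos.SingleOrDefault(f => f.FooID == 1234);
                    if (foo != null)
                    {
                        // do stuff with foo...
                    }
                }
            }
        }

    A side-by-side comparison like this, plus the end of the "which proc scripts have been run" bookkeeping, is probably the most concrete way to present the case to the team.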

    Read the article

  • Setting up port forwarding for 7000 appliance VM in VirtualBox

    - by uejio
    I've been using the 7000 appliance VM for a lot of testing lately and relied on others to set up the networking for the VM for me, but finally, I decided to take the dive and do it myself.  After some experimenting, I came up with a very brief number of steps to do this all using the VirtualBox CLI instead of the GUI. First download the VM image and unpack it somewhere.  I put it in /var/tmp. Then, set your VBOX_USER_HOME to some place with lots of disk space and import the VM:
    export VBOX_USER_HOME=/var/tmp/MyVirtualBox
    VBoxManage import /var/tmp/simulator/vbox-2011.1.0.0.1.1.8/Sun\ ZFS\ Storage\ 7000.ovf
    (go get a cup of tea...) Then, set up port forwarding of the VM appliance BUI and shell. First set up the port as NAT:
    VBoxManage modifyvm Sun_ZFS_Storage_7000 --nic1 nat
    Then set up rules for port forwarding (pick some unused port numbers):
    VBoxManage modifyvm Sun_ZFS_Storage_7000 --natpf1 "guestssh,tcp,,4622,,22"
    VBoxManage modifyvm Sun_ZFS_Storage_7000 --natpf1 "guestbui,tcp,,46215,,215"
    Verify the settings using:
    VBoxManage showvminfo Sun_ZFS_Storage_7000 | grep -i nic
    Start the appliance:
    $ VBoxHeadless --startvm Sun_ZFS_Storage_7000 &
    Connect to it using your favorite RDP client.  I use a Sun Ray, so I use the Sun Ray Windows Connector client:
    $ /opt/SUNWuttsc/bin/uttsc -g 800x600 -P <portnumber> <your-hostname> &
    The portnumber is displayed in the output of the --startvm command. (This did not work after I updated to VirtualBox 4.1.12, so maybe at this point, you need to use the VirtualBox GUI.) It takes a while to first bring up the VM, so please be patient. The longest time is in loading the smf service descriptions, but fortunately, that only needs to be done the first time the VM boots.  There is also a delay in just booting the appliance, so give it some time. Be sure to set the NIC rule on only one port and not all ports, otherwise there will be a conflict in ports and it won't work. After going through the initial configuration screen, you can connect to it using ssh or your browser:
    ssh -p 45022 root@<your-host-name>
    https://<your-host-name>:45215
    BTW, for the initial configuration, I only had to set the hostname and password.  The rest of the defaults were set by VirtualBox and seemed to work fine.

    Read the article

  • Consolidating Oracle E-Business Suite R12 on Oracle's SPARC SuperCluster

    - by Giri Mandalika
    An Optimized Solution for Oracle E-Business Suite (EBS) R12 12.1.3 is now available on oracle.com: The Oracle Optimized Solution for Oracle E-Business Suite. This solution was centered around the engineered system, SPARC SuperCluster T4-4. Check the business and technical white papers along with a bunch of relevant useful resources online at the above optimized solution page for EBS. What is an Optimized Solution? Oracle's Optimized Solutions are designed, tested and fully documented architectures that are tuned for optimal performance and availability. Optimized solutions are NOT pre-packaged, fully tuned, ready-to-install software bundles that can be downloaded and installed. An optimized solution is usually a well documented architecture that was thoroughly tested on a target platform. The technical white paper details the deployed application architecture along with various observations from installing the application on the target platform to its behavior and performance in highly available and scalable configurations. Oracle E-Business Suite R12 Use Case Multiple E-Business Suite R12 12.1.3 application modules were tested in this optimized solution -- Financials (online - oracle forms & web requests), Order Management (online - oracle forms & web requests) and HRMS (online - web requests & payroll batch). The solution will be updated with additional application modules, when they are available. Oracle Solaris Cluster is responsible for the high availability portion of the solution. Performance Data For the sake of completeness, test results were also documented in the optimized solution white paper. Those test results are mainly for educational purposes only. They give a good sense of application behavior under the circumstances the application was tested. Since the major focus of the optimized solution is around highly available and scalable configurations, the application was configured to meet those criteria. Hence the documented test results are not directly comparable to any other E-Business Suite performance test results published by any vendor including Oracle. Such an attempt may lead to skewed, incorrect conclusions. Questions & Requests Feel free to direct your questions to the author of the white papers. If you are a potential customer who would like to test a specific E-Business Suite application module on any non-engineered system such as SPARC T4-X or an engineered system such as SPARC SuperCluster, contact the Oracle Solution Center.

    Read the article

  • Cloud Computing: Start with the problem

    - by BuckWoody
    At one point in my life I would build my own computing system for home use. I wanted a particular video card, a certain set of drives, and a lot of memory. Not only could I not find those things in a vendor’s pre-built computer, but those were more expensive – by a lot. As time moved on and the computing industry matured, I actually find that I can buy a vendor’s system as cheaply – and in some cases far more cheaply – than I can build it myself.   This paradigm holds true for almost any product, even clothing and furniture. And it’s also held true for software… Mostly. If you need an office productivity package, you simply buy one or use open-sourced software for that. There’s really no need to write your own Word Processor – it’s kind of been done a thousand times over. Even if you need a full system for customer relationship management or other needs, you simply buy one. But there is no “cloud solution in a box”.  Sure, if you’re after “Software as a Service” – type solutions, like being able to process video (Windows Azure Media Services) or running a Pig or Hive job in Hadoop (Hadoop on Windows Azure) you can simply use one of those, or if you just want to deploy a Virtual Machine (Windows Azure Virtual Machines) you can get that, but if you’re looking for a solution to a problem your organization has, you may need to mix Software, Infrastructure, and perhaps even Platforms (such as Windows Azure Computing) to solve the issue. It’s all about starting from the problem-end first. We’ve become so accustomed to looking for a box of software that will solve the problem, that we often start with the solution and try to fit it to the problem, rather than the other way around.  When I talk with my fellow architects at other companies, one of the hardest things to get them to do is to ignore the technology for a moment and describe what the issues are. It’s interesting to monitor the conversation and watch how many times we deviate from the problem into the solution. So, in your work today, try a little experiment: watch how many times you go after a problem by starting with the solution. Tomorrow, make a conscious effort to reverse that. You might be surprised at the results.

    Read the article

  • Zenoss Setup for Windows Servers

    - by Jay Fox
    Recently I was saddled with standing up Zenoss for our enterprise.  We're running about 1200 servers, so manually touching each box was not an option.  We use LANDesk for a lot of automated installs and patching - more about that later. The steps below may not necessarily have to be completed in this order - it's just the way I did it.
    STEP ONE:
    Set up a standard AD user.  We want to do this so there's minimal security exposure.  Call the account whatever you want; "domain/zenoss" for our examples.
    ***********************************************************
    STEP TWO:
    Make the following local groups accessible by your zenoss account.
    Distributed COM Users
    Performance Monitor Users
    Event Log Readers (which doesn't exist on pre-2008 machines)
    Here's the Powershell script I used to set up access to these local groups:
    # Created to add Active Directory account to local groups
    # Must be run from elevated prompt, with permissions on the remote machine(s).
    # The txt file should contain the names of the machines that need the account added, one per line.
    # Script will process machines line by line.
    foreach($i in (gc c:\tmp\computers.txt)){
    # Add the user to the first group
    $objUser=[ADSI]("WinNT://domain/zenoss")
    $objGroup=[ADSI]("WinNT://$i/Distributed COM Users")
    $objGroup.PSBase.Invoke("Add",$objUser.PSBase.Path)
    # Add the user to the second group
    $objUser=[ADSI]("WinNT://domain/zenoss")
    $objGroup=[ADSI]("WinNT://$i/Performance Monitor Users")
    $objGroup.PSBase.Invoke("Add",$objUser.PSBase.Path)
    # Add the user to the third group - Group doesn't exist on < Server 2008
    #$objUser=[ADSI]("WinNT://domain/zenoss")
    #$objGroup=[ADSI]("WinNT://$i/Event Log Readers")
    #$objGroup.PSBase.Invoke("Add",$objUser.PSBase.Path)
    }
    **********************************************************
    STEP THREE:
    Set up security on the machine's namespace so our domain/zenoss account can access it. The default namespace for zenoss is:  root/cimv2
    Here's the Powershell script:
    #Grant account defined below (line 11) access to WMI Namespace
    #Has to be run as account with permissions on remote machine
    function get-sid
    {
    Param ($DSIdentity)
    $ID = new-object System.Security.Principal.NTAccount($DSIdentity)
    return $ID.Translate( [System.Security.Principal.SecurityIdentifier] ).toString()
    }
    $sid = get-sid "domain\zenoss"
    $SDDL = "A;;CCWP;;;$sid"
    $DCOMSDDL = "A;;CCDCRP;;;$sid"
    $computers = Get-Content "c:\tmp\computers.txt"
    foreach ($strcomputer in $computers)
    {
        $Reg = [WMIClass]"\\$strcomputer\root\default:StdRegProv"
        $DCOM = $Reg.GetBinaryValue(2147483650,"software\microsoft\ole","MachineLaunchRestriction").uValue
        $security = Get-WmiObject -ComputerName $strcomputer -Namespace root/cimv2 -Class __SystemSecurity
        $converter = new-object system.management.ManagementClass Win32_SecurityDescriptorHelper
        $binarySD = @($null)
        $result = $security.PsBase.InvokeMethod("GetSD",$binarySD)
        $outsddl = $converter.BinarySDToSDDL($binarySD[0])
        $outDCOMSDDL = $converter.BinarySDToSDDL($DCOM)
        $newSDDL = $outsddl.SDDL += "(" + $SDDL + ")"
        $newDCOMSDDL = $outDCOMSDDL.SDDL += "(" + $DCOMSDDL + ")"
        $WMIbinarySD = $converter.SDDLToBinarySD($newSDDL)
        $WMIconvertedPermissions = ,$WMIbinarySD.BinarySD
        $DCOMbinarySD = $converter.SDDLToBinarySD($newDCOMSDDL)
        $DCOMconvertedPermissions = ,$DCOMbinarySD.BinarySD
        $result = $security.PsBase.InvokeMethod("SetSD",$WMIconvertedPermissions)
        $result = $Reg.SetBinaryValue(2147483650,"software\microsoft\ole","MachineLaunchRestriction", $DCOMbinarySD.binarySD)
    }
    ***********************************************************
    STEP FOUR:
    Get the SID for our zenoss account. Powershell:
    #Provide AD User, get SID
    $objUser = New-Object System.Security.Principal.NTAccount("domain", "zenoss")
    $strSID = $objUser.Translate([System.Security.Principal.SecurityIdentifier])
    $strSID.Value
    ******************************************************************
    STEP FIVE:
    Modify the Service Control Manager to allow access to the zenoss AD account. This command can be run from an elevated command line, or through Powershell:
    sc sdset scmanager "D:(A;;CC;;;AU)(A;;CCLCRPRC;;;IU)(A;;CCLCRPRC;;;SU)(A;;CCLCRPWPRC;;;SY)(A;;KA;;;BA)(A;;CCLCRPRC;;;PUT_YOUR_SID_HERE_FROM STEP_FOUR)S:(AU;FA;KA;;;WD)(AU;OIIOFA;GA;;;WD)"
    ******************************************************************
    In step two the script plows through a txt file, processing each computer listed on each line.  The other scripts I ran on each machine using LANDesk.  You can probably edit those scripts to process a text file as well. That's what got me off the ground monitoring the machines using Zenoss.  Hopefully this is helpful for you.  Watch the line breaks when copying the scripts.

    Read the article

  • Checking if an Unloaded Collection Contains Elements

    - by Ricardo Peres
    If you want to know if an unloaded collection in an entity contains elements, or count them, without actually loading them, you need to use a custom query; that is because the Count property (if the collection is not mapped with lazy="extra") and the LINQ Count() and Any() methods force the whole collection to be loaded. You can use something like these two methods, one for checking if there are any values, the other for actually counting them:
    public static Boolean Exists(this ISession session, IEnumerable collection)
    {
        if (collection is IPersistentCollection)
        {
            IPersistentCollection col = collection as IPersistentCollection;

            if (col.WasInitialized == false)
            {
                String[] roleParts = col.Role.Split('.');
                String ownerTypeName = String.Join(".", roleParts, 0, roleParts.Length - 1);
                String ownerCollectionName = roleParts.Last();
                String hql = "select 1 from " + ownerTypeName + " it where it.id = :id and exists elements(it." + ownerCollectionName + ")";
                Boolean exists = session.CreateQuery(hql).SetParameter("id", col.Key).List().Count == 1;

                return (exists);
            }
        }

        return ((collection as IEnumerable).OfType<Object>().Any());
    }

    public static Int64 Count(this ISession session, IEnumerable collection)
    {
        if (collection is IPersistentCollection)
        {
            IPersistentCollection col = collection as IPersistentCollection;

            if (col.WasInitialized == false)
            {
                String[] roleParts = col.Role.Split('.');
                String ownerTypeName = String.Join(".", roleParts, 0, roleParts.Length - 1);
                String ownerCollectionName = roleParts.Last();
                String hql = "select count(elements(it." + ownerCollectionName + ")) from " + ownerTypeName + " it where it.id = :id";
                Int64 count = session.CreateQuery(hql).SetParameter("id", col.Key).UniqueResult<Int64>();

                return (count);
            }
        }

        return ((collection as IEnumerable).OfType<Object>().Count());
    }
    Here's how:
    MyEntity entity = session.Load<MyEntity>(100);

    if (session.Exists(entity.SomeCollection))
    {
        Int64 count = session.Count(entity.SomeCollection);
        //...
    }

    Read the article

  • Are there any concerns with using a static read-only unit of work so that it behaves like a cache?

    - by Rowan Freeman
    Related question: How do I cache data that rarely changes? I'm making an ASP.NET MVC4 application. On every request the security details about the user will need to be checked against the area/controller/action that they are accessing to see if they are allowed to view it. The security information is stored in the database. For example: User Permission UserPermission Action ActionPermission A "Permission" is a token that is applied to an MVC action to indicate that the token is required in order to access the action. Once a user is given the permission (via the UserPermission table) then they have the token and can therefore access the action. I've been looking into how to cache this data (since it rarely changes) so that I'm only querying in-memory data and not hitting a database (which is a considerable performance hit at the moment). I've tried storing things in lists and using a caching provider, but I either run into problems or performance doesn't improve. One problem that I constantly run into is that I'm using lazy loading and dynamic proxies with EntityFramework. This means that even if I ToList() everything and store them somewhere static, the relationships are never populated. For example, User.Permissions is an ICollection but it's always null. I don't want to Include() everything because I'm trying to keep things simple and generic (and easy to modify). One thing I know is that an EntityFramework DbContext is a unit of work that acts with 1st-level caching. That is, for the duration of the unit of work, everything that is accessed is cached in memory. I want to create a read-only DbContext that will exist indefinitely and will only be used to read permission data. Upon testing this it worked perfectly; my page load times went from 200ms+ to 20ms. I can easily force the data to refresh at certain intervals or simply leave it to refresh when the application pool is recycled. Basically it will behave like a cache. Note that the rest of the application will interact with other contexts that exist per request as normal. Is there any disadvantage to this approach? Could I be doing something different?
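    For reference, a minimal sketch of the long-lived read-only context described above (the entity and context names are hypothetical placeholders, and the refresh interval is an assumed value):

        using System;
        using System.Collections.Generic;
        using System.Data.Entity;
        using System.Linq;

        public class Permission { public int Id { get; set; } public string Token { get; set; } }
        public class User { public int Id { get; set; } public virtual ICollection<Permission> Permissions { get; set; } }

        public class SecurityContext : DbContext
        {
            public DbSet<User> Users { get; set; }
            public DbSet<Permission> Permissions { get; set; }
        }

        // A single read-only context used purely as an in-memory cache of permission
        // data; all writes still go through the normal per-request contexts.
        public static class PermissionCache
        {
            private static readonly object sync = new object();
            private static SecurityContext context = new SecurityContext();
            private static DateTime loadedAt = DateTime.UtcNow;
            private static readonly TimeSpan maxAge = TimeSpan.FromMinutes(30); // assumed refresh interval

            public static bool UserHasPermission(int userId, string token)
            {
                lock (sync)
                {
                    if (DateTime.UtcNow - loadedAt > maxAge)
                    {
                        context.Dispose();               // throw away the stale cache...
                        context = new SecurityContext(); // ...and let the next query repopulate it
                        loadedAt = DateTime.UtcNow;
                    }

                    // The first access hits the database; repeated access is served from the
                    // context's first-level cache, and lazy loading still works because the
                    // context stays open.
                    var user = context.Users.FirstOrDefault(u => u.Id == userId);
                    return user != null && user.Permissions.Any(p => p.Token == token);
                }
            }
        }

    The lock is there because DbContext itself is not thread-safe; an alternative with the same effect is to project the permissions into plain dictionaries on each refresh and read those without locking.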

    Read the article

  • Visual studio Real Dark mode (2010,2012,2013)

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/11/02/visual-studio-real-dark-mode-201020122013.aspx
    When Visual Studio 2010 was released three years ago, I soon showed some people a demo of how a dark mode for Visual Studio would be a great idea. Soon we got some theme plugins which let us modify the look of Visual Studio. http://studiostyl.es/ already provides lots of wonderful color schemes that let you modify the theme. These themes also work in WebMatrix 2. WebMatrix 2 has a plugin for themes, made by Yishai Galatzer, that is awesome for WebMatrix 2. In Visual Studio 2012 we got a native dark mode. This means we can configure it without any plugin or anything else. In this post I have a demo to show you how to use the dark mode that is part of Windows 7 (and Windows 8 too). A few months ago I showed a problem where WebMatrix 2 runs slowly; it runs better in Windows 7 dark mode. Windows 7 dark mode simply refers to right-click > Personalize > High Contrast theme at the bottom of the window. This setting makes things a little bit faster. Once you have set this, you will see that Visual Studio doesn't look good anymore because its color scheme is now broken. What you need now is to import a theme from http://studiostyl.es/. When you import one, it will look as good as this. This is the demo look of Windows Phone Express 2010. It works the same for later versions such as 2012 and 2013. Now your VS looks dark. Everything is dark now. Firefox and IE will not run totally in blackish mode, but you can use Chrome; Chrome is less affected by the dark theme. Now if you benchmark it, you will find that everything that used to take a long time to load now runs fast. Note: this is an experiment. Remember to back up your settings before applying a new theme. All I am doing here is making my VS run faster. If you have any trouble or ideas, please comment. Thanks for reading my post.

    Read the article

  • A brief note for customers running SOA Suite on AIX platforms

    - by christian
    When running Oracle SOA Suite with IBM JVMs on the AIX platform, we have seen performance slowdowns and/or memory leaks. On occasion, we have even encountered some OutOfMemoryError conditions and the concomitant Java coredump. If you are experiencing this issue, the resolution may be to configure -Dsun.reflect.inflationThreshold=0 in your JVM startup parameters. https://www.ibm.com/developerworks/java/library/j-nativememory-aix/ contains a detailed discussion of the IBM AIX JVM memory model, but I will summarize my interpretation and understanding of it in the context of SOA Suite, below. Java ClassLoaders on IBM JVMs are allocated a native memory area into which they are anticipated to map such things as jars loaded from the filesystem. This is an excellent memory optimization, as the file can be loaded into memory once and then shared amongst many JVMs on the same host, allowing for excellent horizontal scalability on AIX hosts. However, Java ClassLoaders are not used exclusively for loading files from disk. A performance optimization by the Oracle Java language developers enables reflectively accessed data to optimize from a JNI call into Java bytecodes which are then amenable to hotspot optimizations, amongst other things. This performance optimization is called inflation, and it is executed by generating a sun.reflect.DelegatingClassLoader instance dynamically to inject the Java bytecode into the virtual machine. It is generally considered an excellent optimization. However, it interacts very negatively with the native memory area allocated by the IBM JVM, effectively locking out memory that could otherwise be used by the Java process. SOA Suite and WebLogic are both very large users of reflection code. They reflectively use many code paths in their operation, generating lots of DelegatingClassLoaders in normal operation. The IBM JVM slowdown and subsequent OutOfMemoryError are a direct result of the Java memory consumed by the DelegatingClassLoader instances generated by SOA Suite and WebLogic. Java garbage collection runs more frequently to try and keep memory available, until it can no longer do so and throws OutOfMemoryError. The setting sun.reflect.inflationThreshold=0 disables this optimization entirely, never allowing the JVM to generate the optimized reflection code. IBM JVMs are susceptible to this issue primarily because all Java ClassLoaders have this native memory allocation, which is shared with the regular Java heap. Oracle JVMs don't automatically give all ClassLoaders a native memory area, and my understanding is that jar files are never mapped completely from shared memory in the same way as IBM does it. This results in different behaviour characteristics on IBM vs Oracle JVMs.

    Read the article

  • Black screen on latest Nvidia Cards when starting LightDM/Ubuntu

    - by Luis Alvarado
    Today I installed an Nvidia GT440 on my computer, replacing the one that was there before, an Nvidia 9500GT. After changing it I started getting a problem where the screen just went black when loading the lightdm login screen (where I put in my user and password). The thing is, if I disconnect the VGA cable and connect it again, I get to see the lightdm greeter and everything works perfectly. The problem is that I have to connect/disconnect every time I reboot the PC. I tried installing the 285.xx drivers. Same problem. I removed the Nvidia drivers installed with Jockey, rebooted, same problem. I installed the current 280.xx again, same problem. After all that I did a fresh install of Ubuntu, and selected to install the Nvidia drivers while installing it from the live CD. After booting, the same problem appeared. Dmesg does not say anything wrong about it. Same goes for the log from Jockey. What else should I check, or what can I do to solve it? Just to clarify, this does not happen BEFORE the lightdm greeter appears. I'm guessing it starts with the actual use of the video card by X, with all the 2D/3D stuff that is used in lightdm and Unity. I can use any tty and even see the Ubuntu logo when starting. UPDATE: When I open a game in fullscreen the problem appears again. I have to unplug the monitor cable and plug it back in to see the game. Then when I quit the game I have to do it again to see the desktop. UPDATE 2: Today I bought an HDMI cable, connected the video card to the TV I am testing it with, and it actually did log in correctly without any black screen, but it shows the resolution a little over the actual size of the screen. So I see only half of the launcher, since the left side of it is hidden outside of the real resolution, and the top bar is beyond the resolution. So the black screen is related to the VGA connection.

    Read the article

  • Misadventures at Radio Shack

    - by Chris Williams
    While I'm waiting for my Arduino kits to show up, I started reading the Getting Started With Arduino book from O'Reilly (review coming later) and I'm about 40 pages in when I get to a parts list for one of the first projects. Looks pretty straightforward, and even has Radio Shack part numbers next to almost everything. So on my lunch today, I decided to run out to "The Shack" (seriously, that's their rebranding?) to pick up some basics, like a couple of resistors, a breadboard, a momentary switch and a pack of pre-cut jumper wires. I found the resistors without any difficulty, and while they didn't have the exact switch I wanted, it was easy enough to find one that would do. That's where my good luck abruptly ended. I was surprised that I couldn't find a breadboard or any jumper wires, so while I was looking around, a guy came up and asked me if I needed some help. I told him I did, explained what I needed and even gave him the Radio Shack part number for the pack of jumper wires. After a couple of minutes he says he can't find anything in the system, which was unfortunate but not the end of the world. So then I asked him about the breadboard, and he pointed me to some blank circuit boards (which are not the same thing) and I said (nicely) that those weren't breadboards and attempted to explain (again) what I needed, at which point he says to me "I don't even know what the hell you're looking for!" I stood there for a moment and tried to process his words. About that time, another salesperson came up and asked what I was trying to find. I told her I needed a breadboard, and she pointed to the blank circuit boards and said "they're right in front of you..." After seeing the look on my face, she thought for a minute and said... "OH! you mean those white things. We don't have those anymore." I thanked her, set everything down on the counter and left. (I wasn't going to buy only half the stuff I needed... and I was pretty sure I was never going to be buying ANYTHING at that particular location ever again.) Guess I'll be ordering more stuff online at this point. It's a shame really, because I used to LOVE going to Radio Shack as a kid, looking through all the cool electronics components and stuff, even if I didn't understand what most of them were at the time. Seems like the only thing they carry in any quantity/variety now is cell phones and random stereo connectors.

    Read the article

  • Oracle Database In-Memory

    - by Mike.Hallett(at)Oracle-BI&EPM
    Larry Ellison unveiled the next major milestone in database technology, Oracle Database In-Memory, on June 10, 2014. Oracle Database In-Memory will be generally available in July 2014 and can be used with all hardware platforms on which Oracle Database 12c is supported. This option will accelerate database performance by orders of magnitude for analytics, data warehousing, and reporting while also speeding up online transaction processing (OLTP). It allows any existing Oracle Database-compatible application to automatically and transparently take advantage of columnar in-memory processing, without additional programming or application changes. Benefits:
    Fast ad-hoc analytics without the need to pre-create indexes
    Completely transparent to existing applications
    Faster mixed workload OLTP
    No database size limit
    Industrial strength availability and security
    Robustness and maturity of Oracle Database 12c
    To find out more see Oracle Database In-Memory
    Comment from Rittman Mead on Oracle In-Memory Option Launch
    ... and I will let you know how this unfolds in regards to advantages for OBI11g and Exalytics and Big Data over the coming months.

    Read the article

  • Updating My Online Boggle Solver Using jQuery Templates and WCF

    With WebForms, each ASP.NET page's rendered output includes a <form> element that performs a postback to the same page whenever a Button control within the form is clicked, or whenever the user modifies a control whose AutoPostBack property is set to True. This model simplifies web page development, but carries with it some costs - namely, the large amount of data exchanged between the client and the server during a postback. On postback the browser sends the values of all of its form fields (including hidden ones, like view state, which may be quite large) to the server; the server then sends back the entire contents of the web page. While there are some scenarios where this amount of information needs to be exchanged, in many cases the user has performed some action that requires far less information to be exchanged. With a little bit of forethought and code we can have the browser and server exchange much less data, which leads to more responsive web pages and an improved user experience. Over the past several weeks I've been writing an article series on accessing server-side data from client script. Rather than rely solely on forms and postbacks, many websites use JavaScript code to asynchronously communicate with the server in response to the page loading or some other user action. The server, upon receiving the JavaScript-initiated request, returns just the data needed by the browser, which the browser then seamlessly integrates into the web page. There are a variety of technologies and techniques that can be employed to provide both the needed server- and client-side functionality. Last week's article, Using WCF Services with jQuery and the ASP.NET Ajax Library, explored using the Windows Communication Foundation, or WCF, to serve data from the web server and showed how to consume such a service using both the ASP.NET Ajax Library and jQuery. In a previous 4Guys article, Creating an Online Boggle Solver, I built an application to find all solutions in a game of Boggle. (Boggle is a word game trademarked by Parker Brothers and Hasbro that involves several players trying to find as many words as they can in a 4x4 grid of letters.) This article takes the lessons learned in Using WCF Services with jQuery and the ASP.NET Ajax Library and uses them to update the user interface for my online Boggle solver, replacing the existing WebForms-based user interface with a more modern and responsive interface. I also used jQuery Templates, a JavaScript-based templating library that is useful for displaying the results from a server-side service. Read on to learn more! Read More >
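    To make the shape of that exchange concrete, here is a hedged sketch of the kind of AJAX-friendly WCF service the approach relies on; the class and method names are illustrative rather than the article's actual code, and the word-finding logic is stubbed out:

        using System.Collections.Generic;
        using System.ServiceModel;
        using System.ServiceModel.Activation;
        using System.ServiceModel.Web;

        [ServiceContract(Namespace = "")]
        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
        public class BoggleService
        {
            // Returns only the solutions for the submitted 16-letter board as JSON,
            // instead of posting back and re-rendering the entire page.
            [OperationContract]
            [WebGet(ResponseFormat = WebMessageFormat.Json)]
            public List<string> Solve(string board)
            {
                var solutions = new List<string>();
                // The real solver's dictionary and board search would go here.
                return solutions;
            }
        }

    Client script (jQuery in the article) then requests something like BoggleService.svc/Solve?board=... and injects the returned list into the page, which is where the jQuery Templates come in.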

    Read the article

  • Sync Google Calendar with SharePoint Calendar

    - by dataintegration
    The ADO.NET Providers for Google and SharePoint make it easy to retrieve and update data in both Google's web services and SharePoint. This article shows how the SQL interface to data makes it easy to build applications that need to move data from one source to another. The application described here is a demo Windows application that synchronizes calendar events between Google and SharePoint, but the RSSBus Providers can be used to achieve integrations on both the .NET and the Java platforms, including more sophisticated features like full automation. Getting the Events Step 1: Google accounts can have several calendars. Obtain a list of a user's Google Calendars by issuing a query to the Calendars table. For example: SELECT * FROM Calendars. Step 2: In order to get a list of the events from a given Google Calendar, issue a query to the CalendarEvents table while specifying the CalendarId from the Calendars table. The resulting events can be further filtered by using the StartDateTime or EndDateTime columns. For example: SELECT * FROM CalendarEvents WHERE (CalendarId = '[email protected]') AND (StartDateTime >= '1/1/2012') AND (StartDateTime <= '2/1/2012') Step 3: SharePoint stores data in Lists. There are various types of lists, e.g., document lists and calendar lists. A SharePoint account can have several lists of the same type. To find all the calendar lists in SharePoint, use the ListLists stored procedure and inspect the BaseTemplate column. Step 4: The SharePoint data provider models each SharePoint list as a table. Get the events in a particular calendar by querying the table with the same name as the list. The events may be filtered further by specifying the EventDate or EndDate columns. For example: SELECT * FROM Calendar WHERE (EventDate >= '1/1/2012') AND (EventDate <= '2/1/2012') Synchronizing the Events Synchronizing the events is a simple process. Once the events from Google and SharePoint are available, they can be compared and synchronized based on user preference. The sample application does this based on user input, but it is easy to create one that does the synchronization automatically. The INSERT, UPDATE, and DELETE statements available in both data providers make it easy to create, update, or delete events as needed. Pre-Built Demo Application The executable for the demo application can be downloaded here. Note that this demo is built using BETA builds of the ADO.NET Provider for Google V2 and ADO.NET Provider for SharePoint V2, and will expire in 2013. Source Code You can download the full source of the demo application here. You will need the Google ADO.NET Data Provider V2 and the SharePoint ADO.NET Data Provider V2, which can be obtained here.
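    As a rough illustration of how the queries in Steps 1-4 are issued from code, here is a hedged sketch using the generic ADO.NET factory classes; the provider invariant names ("RSSBus.Google" and "RSSBus.SharePoint") and the connection strings are assumptions for illustration only, so check the providers' documentation for the real values:

        using System.Data;
        using System.Data.Common;

        class CalendarReader
        {
            // Runs a SELECT against whichever ADO.NET provider is named and returns the rows.
            static DataTable Query(string providerInvariantName, string connectionString, string sql)
            {
                DbProviderFactory factory = DbProviderFactories.GetFactory(providerInvariantName);
                using (DbConnection conn = factory.CreateConnection())
                {
                    conn.ConnectionString = connectionString;
                    conn.Open();
                    using (DbCommand cmd = conn.CreateCommand())
                    {
                        cmd.CommandText = sql;
                        var table = new DataTable();
                        table.Load(cmd.ExecuteReader());
                        return table;
                    }
                }
            }

            static void Main()
            {
                // Hypothetical provider names and connection strings.
                DataTable googleEvents = Query("RSSBus.Google", "User=...;Password=...;",
                    "SELECT * FROM CalendarEvents WHERE StartDateTime >= '1/1/2012' AND StartDateTime <= '2/1/2012'");
                DataTable spEvents = Query("RSSBus.SharePoint", "URL=...;User=...;Password=...;",
                    "SELECT * FROM Calendar WHERE EventDate >= '1/1/2012' AND EventDate <= '2/1/2012'");
                // Compare the two DataTables here and issue INSERT/UPDATE/DELETE
                // statements through the same pattern to bring them into sync.
            }
        }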

    Read the article

  • BIP 10.1.3.4.x June 2010 Update Available

    - by Tim Dexter
A new patchset for 10.1.3.4.0 and 10.1.3.4.1 is available on Metalink. Some notes: The patch number is 9791839. This patchset includes 28 new bug fixes since the last patchset release on March 31. This is a cumulative update that includes all the fixes and enhancements from previous updates, so the patch supersedes the other two updates. Install instructions are in the readme inside the patch. There is also a new BIP client patch available, 9821068. No new template-building features to my knowledge, but there is an update to the Template Viewer to allow you to test and debug your shiny new Excel templates.

Server

8529759 - XMLP_TEMPLATE_DESIGNER CANNOT SAVE / UPLOAD TEMPLATE
8566455 - BI PUBLISHER SCHEDULER DOES NOT START WITH JNDI DATA SOURCE
9295667 - RESPONSE OF GETSCHEDULEDREPORTINFO RETURNS STATUS AS 'UNKNOWN' INSTEAD OF 'SCHED
9542413 - UNABLE TO CREATE A NEW TEMPLATE FROM UI
9546137 - EXCEL ANALYZER TEMPLATE FAILS FOR A STRUCTURED XML WHEN IT IS UPLOADED
9556338 - SIEBEL - BIP PARAMETERS SORT ORDER
9560562 - BI PUBLISHER CACHE DIRECTORY FILLING UP AND POINTING TO INVALID DIRECTORY
9646599 - USER ROLE DEFINED AS PRIMARYGROUP IN ACTIVEDIRECTORY GROUP ARE NOT RECOGNIZED
9664768 - ER: NEED TO BIND USER ATTRIBUTE VALUES DEFINED IN ACTIVEDIRECTORY IN DATA QUERY
9665075 - BI PUBLISHER AFTER 9546699 NOTIFICATIONS FOR REPORTS FAIL
9669973 - ER: NEED TO SUPPORT PRE-PROCESSING XML WITH XSL FOR EXCEL TEMPLATE
9704401 - ER: NEED TO SUPPORT DEFAULT GROUP FOR ALL USERS IN LDAP/AD SECURITY
9711899 - SEARCH PARAMETER IS NOT VISIBLE WHEN SCHEDULE A REPORT
9753736 - SOME ROLES FROM ACTIVEDIRECTORY ARE NOT LISTED IN ADMIN ROLE-FOLDER MAPPING
9771354 - MULTIPLE PARAMETERS IN 10.1.3.4.1 DATA TEMPLATE ACT DIFFERENTLY FROM 10.1.3.
9772982 - "REFRESH OTHER PARAMETERS ON CHANGE" DOESN'T WORK PROPERLY

Core

8599646 - ER: EXTRA SPACE ADDED BELOW IMAGE IN A TABLE CELL OF TEMPLATE IN FIREFOX
9377593 - SOME ROWS HEIGHT IN HTML/EXCEL OUTPUT ARE TOO BIG IN BI PUBLISHER
9487030 - NAVIGATION TREE REPEATING TWICE IN PDF DOCUMENT CREATED BY BI PUBLISHER
9509432 - PERFORMANCE ISSUE WHEN USING PDF TEMPLATE
9534424 - PS: DOCUMENT-REPEAT-FULLPATH-ELEMENTNAME SHOULDNT USE DOT "." AS PATH SEPARATOR
9553360 - FORMPROCESSOR CANNOT PARSE SOME PDF TEMPLATES
9554959 - TEXT IN AUTOSHAPE IS NOT PROPERLY CUT OFF FOR LINE WRAPPING
9569417 - AFTER APPLYING PATCH 9509432 PDF TEMPLATES WITH DBDRV PRODUCE NO OUTPUT
9571670 - ER: EXCEL TEMPLATE TO SUPPORT XSLT LOGIC AND XSL CUSTOM EXTENTIONS
9589809 - XSL:CALL-TEMPLATE IS MISSING IN GENERATED XSL FILE
9605920 - BOOKMARK TESTCASE FAILED DUE TO ER9283933
9689634 - PRINT FLOW CHART USING ACROSS 3 DOWN 0 GIVES EXTRA BLANK PAGES

You might have noticed some fixes and enhancements to the Excel templates, so I can get back on those now. There is a part two to the Mapviewer BIP Mashup coming ... just need another 4 hours in the day to squeeze it in.

    Read the article

  • Is MongoDB a good choice or not for my application?

    - by shubham
I have a reporting application that stores reports in XML format as received from the source (the XML schema is not defined; it can be any format), and those reports contain some keys and values. For example, jobid and setid are keys for one type of report, userid and groupId for another, and so on. The type of keys that can be referred to in a document is determined by the namespaces used in the XML doc, and the keys are stored on the basis of the namespace used. For example, if a tag in an XML fragment uses namespace="myspace1", and I have keys A and B for myspace1 stored in another table, the application fetches those keys from that table for this namespace, looks for their values in the XML doc, and stores them in another table along with a pointer to the XML document (the ID of the record storing the complete XML document in a cell).

Use cases:

- When the user queries for a key and value, I return the document or set of documents having those key/value pairs.
- When the user queries for a certain key and provides the name of a pre-stored XSLT, I fetch the set of documents fulfilling that criteria and convert the XML to HTML with the specified XSLT.
- When the user asks for a particular fragment of a doc, it can fetch a subset from a particular document as well.
- When the user queries for the top x values of a certain key, I return the set of documents having the top x values of that key.

I am using a DB2 database for its XML support alongside its relational capabilities. That makes it easier for me to run XPath expressions, fetch values of keys, and aggregate a set of documents fulfilling a criteria, all on the database side.

Problems: DB2 stores XML docs of up to 2 GB in size, and retrieval is very slow. If something involves many documents, it takes significant time for results to show up in the browser, and the user has to wait. Can MongoDB help in this case, as it is document oriented? Can I run XML-related XPath queries and document transformations on the database side? Or is it OK to use both in such a case?
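As a rough illustration of how these lookups might translate to MongoDB, here is a sketch using the official Node.js driver in TypeScript. The database, collection, and field names are assumptions, since the question does not fix a schema; the idea is to store the raw XML alongside the pre-extracted key/value pairs. Two caveats: MongoDB has no server-side XPath or XSLT, so key extraction and XML-to-HTML transformation would stay in application code, and a single MongoDB document is capped at 16 MB, so anything approaching the 2 GB figure above would need GridFS.

import { MongoClient } from "mongodb";

// Assumed document shape (illustrative only):
// { xml: "<report>...</report>", keys: [{ name: "jobid", value: 42 }] }
async function queryReports(): Promise<void> {
    const client = new MongoClient("mongodb://localhost:27017");
    await client.connect();
    try {
        const reports = client.db("reporting").collection("reports");

        // Use case 1: documents having a given key/value pair.
        const byKey = await reports
            .find({ keys: { $elemMatch: { name: "jobid", value: 42 } } })
            .toArray();

        // Use case 4: top 10 documents by the value of a given key
        // (assumes the value is stored in a sortable, numeric form).
        const topTen = await reports
            .aggregate([
                { $unwind: "$keys" },
                { $match: { "keys.name": "jobid" } },
                { $sort: { "keys.value": -1 } },
                { $limit: 10 },
            ])
            .toArray();

        console.log(byKey.length, topTen.length);
    } finally {
        await client.close();
    }
}

queryReports().catch(console.error);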

    Read the article

< Previous Page | 307 308 309 310 311 312 313 314 315 316 317 318  | Next Page >