Search Results

Search found 26412 results on 1057 pages for 'product key'.


  • The JDEdwards EnterpriseOne PreSales University

    - by Julien Haye
    Istanbul, NOV 5-9. Wednesday, NOV 7 - It is raining outside and I am sitting in my hotel room (#106) in Istanbul, creating my first blog entry. Today this blog was enabled and I am excited to have the ability to share my (first) thoughts with the EMEA JDE Partner Community. I am here in Istanbul because we are currently running the JD Edwards PreSales University event series. This PreSales University is an established event series which we are delivering for the fifth time now and for the first time in the ECEMEA region. Delegates value the openness and competence of the Product Strategy and Product Development teams from Denver and India. Together with the regional Oracle PreSales team we had very valuable discussions around product features and functions and about the business value of the newly delivered applications and tools. Additionally, the event provides endless opportunities to exchange ideas with other JD Edwards Partners and the Oracle PreSales Team. With its focus on sharing and learning, best practice, user experience and transforming technologies, delegates will leave this event with an abundance of new ideas and best practices to try in their coming projects and existing customer implementations. A day out of the office gives delegates a chance to gain a new perspective on their business processes. Everybody sees better ways of working just by being immersed in an environment where the focus is on using products more effectively. (Photo: Apps Track - highly concentrated participants in Istanbul listening to Jeff Erickson presenting the news about OneView Reporting. Jeff: We believe "the things you said".) The event is organized into two tracks, one for Apps and one for Tech. Everybody was able to learn new features and functions and how to position these products. The focus was on the new Apps release 9.1 and Tools release 9.1.2 and their value propositions. For all topics, hands-on exercises were given to the participants. Even very experienced senior consultants learned a lot from this event. In total we have 55 people registered and we still have some more content to deliver. By the way: Istanbul is a nice place to be. I already booked my next trip to this beautiful city. In two weeks we deliver the JD Edwards EECIS Executive Forum, again in Istanbul. Once again a tough agenda. I will let you know if I get the chance to take a walk outside and see a bit more of this beautiful city. At least I expect to have a different room number. Many greetings, Hartmut Wiese, Oracle Alliances & Channels EMEA

    Read the article

  • When someone deletes a shared data source in SSRS

    - by Rob Farley
    SQL Server Reporting Services plays nicely. You can have things in the catalogue that get shared. You can have Reports that have Links, Datasets that can be used across different reports, and Data Sources that can be used in a variety of ways too. So if you find that someone has deleted a shared data source, you potentially have a bit of a horror story going on. And this works for this month’s T-SQL Tuesday theme, hosted by Nick Haslam, who wants to hear about horror stories. I don’t write about LobsterPot client horror stories, so I’m writing about a situation that a fellow MVP friend asked me about recently instead. The best thing to do is to grab a recent backup of the ReportServer database, restore it somewhere, and figure out what’s changed. But of course, this isn’t always possible. And it’s much nicer to help someone with this kind of thing, rather than to be trying to fix it yourself when you’ve just deleted the wrong data source. Unfortunately, it lets you delete data sources, without trying to scream that the data source is shared across over 400 reports in over 100 folders, as was the case for my friend’s colleague. So, suddenly there’s a big problem – lots of reports are failing, and the time to turn it around is small. You probably know which data source has been deleted, but getting the shared data source back isn’t the hard part (that’s just a connection string really). The nasty bit is all the re-mapping, to get those 400 reports working again. I know from exploring this kind of stuff in the past that the ReportServer database (using its default name) has a table called dbo.Catalog to represent the catalogue, and that Reports are stored here. However, the information about what data sources these deployed reports are configured to use is stored in a different table, dbo.DataSource. You could be forgiven for thinking that shared data sources would live in this table, but they don’t – they’re catalogue items just like the reports. Let’s have a look at the structure of these two tables (although if you’re reading this because you have a disaster, feel free to skim past). Frustratingly, there doesn’t seem to be a Books Online page for this information, sorry about that. I’m also not going to look at all the columns, just ones that I find interesting enough to mention, and that are related to the problem at hand. These fields are consistent all the way through to SQL Server 2012 – there doesn’t seem to have been any changes here for quite a while. dbo.Catalog The Primary Key is ItemID. It’s a uniqueidentifier. I’m not going to comment any more on that. A minor nice point about using GUIDs in unfamiliar databases is that you can more easily figure out what’s what. But foreign keys are for that too… Path, Name and ParentID tell you where in the folder structure the item lives. Path isn’t actually required – you could’ve done recursive queries to get there. But as that would be quite painful, I’m more than happy for the Path column to be there. Path contains the Name as well, incidentally. Type tells you what kind of item it is. Some examples are 1 for a folder and 2 a report. 4 is linked reports, 5 is a data source, 6 is a report model. I forget the others for now (but feel free to put a comment giving the full list if you know it). Content is an image field, remembering that image doesn’t necessarily store images – these days we’d rather use varbinary(max), but even in SQL Server 2012, this field is still image. 
It stores the actual item definition in binary form, whether it's actually an image, a report, whatever. LinkSourceID is used for Linked Reports, and has a self-referencing foreign key (allowing NULL, of course) back to ItemID. Parameter is an ntext field containing XML for the parameters of the report. Not sure why this couldn't be a separate table, but I guess that's just the way it goes. This field gets changed when the default parameters get changed in Report Manager. There is nothing in dbo.Catalog that describes the actual data sources that the report uses. The default data sources would be part of the Content field, as they are defined in the RDL, but when you deploy reports, you typically choose to NOT replace the data sources. Anyway, they're not in this table. Maybe it was already considered a bit wide to throw in another ntext field, I'm not sure. They're in dbo.DataSource instead.
dbo.DataSource
The Primary key is DSID. Yes it's a uniqueidentifier... ItemID is a foreign key reference back to dbo.Catalog. Fields such as ConnectionString, Prompt, UserName and Password do what they say on the tin, storing information about how to connect to the particular source in question. Link is a uniqueidentifier, which refers back to dbo.Catalog. This is used when a data source within a report refers back to a shared data source, rather than embedding the connection information itself. You'd think this should be enforced by foreign key, but it's not. It does allow NULLs though. Flags – this is an int, and I'll come back to this. When a Data Source gets deleted out of dbo.Catalog, you might assume that it would be disallowed if there are references to it from dbo.DataSource. Well, you'd be wrong. And not because of the lack of a foreign key either. Deleting anything from the catalogue is done by calling a stored procedure called dbo.DeleteObject. You can look at the definition in there – it feels very much like the kind of Delete stored procedures that many people write, the kind of thing that means they don't need to worry about allowing cascading deletes with foreign keys – because the stored procedure does the lot. Except that it doesn't quite do that. If it deleted everything on a cascading delete, we'd've lost all the data sources as configured in dbo.DataSource, and that would be bad. This is fine if the ItemID from dbo.DataSource hooks in – if the report is being deleted. But if a shared data source is being deleted, you don't want to lose the existence of the data source from the report. So it sets it to NULL, and it marks it as invalid. We see this code in that stored procedure.

        UPDATE [DataSource]
           SET [Flags] = [Flags] & 0x7FFFFFFD, -- broken link
               [Link] = NULL
        FROM [Catalog] AS C
           INNER JOIN [DataSource] AS DS ON C.[ItemID] = DS.[Link]
        WHERE (C.Path = @Path OR C.Path LIKE @Prefix ESCAPE '*')

Unfortunately there's no semi-colon on the end (but I'd rather they fix the ntext and image types first), and don't get me started about using the table name in the UPDATE clause (it should use the alias DS). But there is a nice comment about what's going on with the Flags field. What I'd LIKE it to do would be to set the connection information to a report-embedded copy of the connection information that's in the shared data source, the one that's about to be deleted. I understand that this would cause someone to lose the benefit of having the data sources configured in a central point, but I'd say that's probably still slightly better than LOSING THE INFORMATION COMPLETELY.
Sorry, rant over. I should log a Connect item – I'll put that on my todo list. So it sets the Link field to NULL, and marks the Flags to tell you they're broken. So this is your clue to fixing it. A bitwise AND with 0x7FFFFFFD is basically stripping out the '2' bit from a number. So numbers like 2, 3, 6, 7, 10, 11, etc, whose binary representation ends in either 11 or 10, get turned into 0, 1, 4, 5, 8, 9, etc. We can test for it using a WHERE clause that matches the SET clause we've just used. I'd also recommend checking for Link being NULL and also having no ConnectionString. And join back to dbo.Catalog to get the path (including the name) showing where the broken reports are – in case you get a surprise from a different data source that was broken in the past.

        SELECT c.Path, ds.Name
        FROM dbo.[DataSource] AS ds
        JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
        WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
        AND ds.[Link] IS NULL
        AND ds.[ConnectionString] IS NULL;

When I just ran this on my own machine, having deleted a data source to check my code, I noticed a Report Model in the list as well – so if you had thought it was just going to be reports that were broken, you'd be forgetting something. So to fix those reports, get your new data source created in the catalogue, and then find its ItemID by querying Catalog, using Path and Name to find it. And then use this value to fix them up. To fix the Flags field, just add 2. I prefer to use bitwise OR which should do the same. Use the OUTPUT clause to get a copy of the DSIDs of the ones you're changing, just in case you need to revert something later after testing (doing it all in a transaction won't help, because you'll just lock out the table, stopping you from testing anything).

        UPDATE ds
        SET [Flags] = [Flags] | 2,
            [Link] = '3AE31CBA-BDB4-4FD1-94F4-580B7FAB939D' /*Insert your own GUID*/
        OUTPUT deleted.Name, deleted.DSID, deleted.ItemID, deleted.Flags
        FROM dbo.[DataSource] AS ds
        JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
        WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
        AND ds.[Link] IS NULL
        AND ds.[ConnectionString] IS NULL;

But please be careful. Your mileage may vary. And there's no reason why 400-odd broken reports need to be quite the nightmare that it could be. Really, it should be less than five minutes. @rob_farley
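    The "find its ItemID by querying Catalog, using Path and Name" step above can be written as a short query. A minimal sketch (the folder path and name here are hypothetical; Type = 5 is the data source type listed earlier in the post):

        SELECT c.ItemID, c.Path, c.Name
        FROM dbo.[Catalog] AS c
        WHERE c.[Type] = 5                                         -- 5 = data source
          AND c.[Path] = '/Data Sources/MyNewSharedDataSource';    -- Path includes the Name

    The ItemID this returns is the GUID to plug into the [Link] column in the UPDATE above.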

    Read the article

  • Could my 64-bit server be somehow identifying itself as a 32-bit server?

    - by Deane
    Has anyone ever heard of a 64-bit OS identifying itself as a 32-bit OS? We have a Windows Server 2008 R2 x64 development server. We've been trying to activate it with a product key from MSDN, but it keeps telling us the key is invalid. I've opened a ticket with MSDN for this. Then something odd happened -- I tried to install a 64-bit version of SQL Server 2005. After it extracted, we got this message: This version of hotfix.exe is not compatible with the version of Windows you're running. Check your computer's system information to see whether you need an x86 (32-bit) or x64 (64-bit) version of the program... Now, we're pretty sure this is a 64-bit OS. Computer Properties says: System Type: 64-bit Operating System. Also, we have both a "Program Files" and a "Program Files (x86)" directory. I don't know how the product key activator or the SQL install program attempts to divine the type of OS, but could it be...wrong?
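    For what it's worth, a couple of quick checks from a command prompt will confirm what the OS itself reports. Both are standard on Server 2008 R2; offered here as a hedged suggestion, not a diagnosis:

        wmic os get OSArchitecture
        rem expected on this box: 64-bit
        echo %PROCESSOR_ARCHITECTURE%
        rem expected in a native 64-bit command prompt: AMD64

    If both report 64-bit, the problem is more likely in how the installer or activation wizard detects the platform than in the OS itself.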

    Read the article

  • Customer Webcast: Alcatel-Lucent Creates a Modern User Experience

    - by [email protected]
    Today, customer satisfaction is critical to a company's long-term success. With customers searching the internet to find new solutions and offerings, it's more important than ever to deliver a modern and engaging user experience that's both interactive and community-based. Join us on June 30th for this exclusive LIVE Webcast with Saeed Hosseiniyar, CIO of Alcatel-Lucent's Enterprise Products Group, and Andy MacMillan, Vice President of Product Management for Oracle's Enterprise 2.0 Solutions. You'll learn how a modern customer service portal with integrated Web 2.0 and social media features can:
    - Improve customer satisfaction by delivering rich, personalized and interactive content
    - Speed product development by facilitating participation and feedback from customers through online communities
    - Improve ROI with a unified platform that delivers content to employees, partners and customers
    You'll walk away with concrete strategies, best practices and real-world insights on how to transform your company's brand with a next-generation customer service and support site. Register today for this complimentary live Webcast!

    Read the article

  • SQL Down Under Show 51 - Guest Conor Cunningham - Now online

    - by Greg Low
    Late last night I got to record an interview with Conor Cunningham. Most people that know Conor have come across him as the product team wizard that knows so much about query processing and optimization in SQL Server. Conor is currently spending quite a lot of time working on Windows Azure SQL Database, which we used to know as SQL Azure. I'm still trying to think of a good way to say "WASD". I suppose I'll pronounce it like "wassid". Windows Azure SQL Reporting is easier. I think it just needs to be pronounced like "wazza" with a very Australian accent. In the show, we've spent time on the current state of the platform, on dispelling a number of common misbeliefs about the product, and hopefully on answering most of the common questions that seem to get asked about it. We then ventured into Federations, Data Sync, and Reporting. You'll find the show (and previous shows) here: http://www.sqldownunder.com/Resources/Podcast.aspx Enjoy! PS: For those that like transcripts, we've got the process for producing them much improved now and the transcript should also be up within a few days.

    Read the article

  • EPM Architecture: Foundation

    - by Marc Schumacher
    This post is the first of a series that is going to describe the EPM System architecture per component. During the following weeks a couple of follow up posts will describe each component. If applicable, the component will have its standard port next to its name in brackets. EPM Foundation is Java based and consists of two web applications, Shared Services and Workspace. Both applications are accessed by browser through Oracle HTTP Server (OHS) or Internet Information Services (IIS). Communication to the backend database is done by JDBC. The file system to store Lifecycle Management (LCM) artifacts can be either local or remote (e.g. NFS, network share). For authentication purposes, the EPM Product Suite can connect to external directories or databases. Interaction with other EPM Suite components like product specific Lifecycle Management connectors or Reporting and Analysis Web happens through HTTP protocol. The next post will cover Reporting and Analysis.

    Read the article

  • FY11 plans – how can you increase your SOA business?

    - by Jürgen Kress
    Thanks for a fantastic FY10 – it was great to work with all of you! Yes, with the economic crisis the fiscal year was hard. SOA and Oracle Fusion Middleware do address these challenges and can help companies to save costs, to integrate their systems, and to automate and change their processes. More when we publish our fiscal year results. What is on the agenda for FY11?
    Specialization: It is key that you become SOA & Application Grid Specialized. We will focus our activities and budgets on partners with Specialization!
    Sales campaigns: To support you in our joint business we will continue to run joint sales campaigns. With OFM 11g there is a great opportunity to generate service revenue to migrate and to consolidate on the platform. It is key that you register your opportunities within the Open Market Model (OMM) to ensure sales alignment.
    Enablement: With the release of many new products and versions, training is key. We will continue to offer training dedicated to your role: sales, pre-sales and implementation. Make sure that you check local partner training calendars and sign up for the next bootcamps.
    Thanks for your support! Jürgen Kress

    Read the article

  • Where can I get a list of PDF viewer/form-filler components for C#? [closed]

    - by Volomike
    Where can I get a list of recommended PDF viewer and form-filler components that I can buy and install in C#? I query on C# component PDF viewer on Google and get a lot of hits, but I'm not certain what programmers have tried and liked. Background: See, my employer is wanting me to build one in C# as a kind of training exercise, but also to be a product for sales lead generation for another product they sell. The employer is perfectly okay buying something, as long as it's good. I've learned VB6 and PHP, and know a little C and C++, so I'm trying to learn where people get the best-rated addon components for it, and especially for this PDF viewer and form filler thing he wants.

    Read the article

  • Microsoft's FUSE Labs Unveils Spindex Social Networking Tool

    Microsoft's FUSE Labs has been busy lately with researching and creating new products. One such product was introduced this week at San Francisco's Web 2.0 Expo. The product is Spindex, a social networking tool that allows users to simplify their social networking lives. At the moment Spindex is in its infancy, with its preview being limited to those attending the Web 2.0 Expo. What has been released so far, however, is promising and should give social networking fans something to look forward to....

    Read the article

  • Delivery terminology and order of magnitude

    - by Peter Turner
    What is the standard way of describing how software products are released, and the proportionate order of magnitude of the changes conveyed, relative to the software product? Is a scheme of Release > Update > Patch > Bug Fix redundant? Or is Update > Patch too terse? As an end user I'd think that all bug fixes are patches (insofar as they are not 100% new code), and all patches should be updates (insofar as they don't degrade the product), and all updates should be releases (insofar as they are actually released), but this really doesn't help anyone understand why they need to get them. Then, if the person who makes the software change appends "critical" or "zero-day" in the notes, I would be unwise to leave the changes unapplied.

    Read the article

  • Representing heightmaps, on disk and when drawing

    - by gardian06
    This is a conglomeration question; when answering, please specify which part you are addressing. I am looking at creating a maze-type game that utilizes elevation. I have a few features I would like to have, but am unsure as to some of the implementation. I have done work doing file-IO maze generation (using a key to read the file, and then generating the level based on that file), but I am unsure how to think about this with elevation in the mix. I think height maps might be a good approach, but I don't know how to represent them effectively. For a height map, which is more beneficial: XML (containing h[u,v] data and a key definition), CSV (item1 is the key reference, item2 is the elevation), or another approach that I have not thought of yet? When it comes to placing the elevation values themselves, what kind of delta-h values are appropriate to make a slope noticeable at about a 60-degree angle while not really affecting gravity-driven physics (assuming some effect while moving up/down hill)? I am thinking of maybe going to procedural generation at some point, but am wondering if it is practical to have a procedurally generated grid (wall squares possibly the same dimensions as the open-space squares), or if designing to thin-wall open spaces is better? This decision will affect the amount of work needed on the graphics end for uniform vs. irregular walls. EDIT: The game will be an elevation maze shooter. Levels/maps will be mazes with elevation the player has to negotiate. Elevations will have effects on "combat", vision, and movement.
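    As a point of comparison for the CSV option, here is a minimal sketch (assuming a plain numeric grid rather than the key/elevation pairs described above; the function names are made up for illustration), including the delta-h arithmetic for a roughly 60-degree slope:

        import csv, math

        def load_heightmap(path):
            # Read a CSV heightmap into a 2D list of floats (row = v, column = u).
            with open(path, newline="") as f:
                return [[float(cell) for cell in row] for row in csv.reader(f)]

        def slope_degrees(h1, h2, cell_size):
            # Slope angle between two neighbouring cells.
            return math.degrees(math.atan2(abs(h2 - h1), cell_size))

        # For a slope of about 60 degrees across one cell, the height delta needs
        # to be roughly cell_size * tan(60 deg), i.e. ~1.73 * cell_size.
        cell_size = 1.0
        delta_h = cell_size * math.tan(math.radians(60))
        print(round(delta_h, 2))                                  # ~1.73
        print(round(slope_degrees(0.0, delta_h, cell_size), 1))   # ~60.0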

    Read the article

  • UppercuT v1.0 and 1.1 – Linux (Mono), Multi-targeting, SemVer, Nitriq and Obfuscation, oh my!

    - by Robz / Fervent Coder
    Recently UppercuT (UC) quietly released version 1 (in August). I'm pretty happy with where we are, although I think it's a few months later than I originally planned. I'm glad I held it back; it gave me some more time to think about some things a little more and also the opportunity to receive a patch for running builds with UC on Linux. We also released v1.1 very recently (December).
    UppercuT v1
    Builds On Linux
    Perhaps the most significant change to UC going v1 is that it now supports builds on Linux using Mono! This is thanks mostly to Svein Ackenhausen for the patches and working with me on getting it all working while not breaking the Windows builds! This means you can use Mono on Windows or Linux. Notice the shell files to execute with Linux that come as part of UC now.
    Multi-Targeting
    Perhaps one of the hardest things to do that requires an automated build is multi-targeting. At v1 this is early, and possibly prone to some issues, but available. We believe in making everything stupid simple, so it's as simple as adding a comma to the microsoft.framework property, i.e. "net-3.5, net-4.0", to suddenly produce both framework builds. When you build, that is what you get (if you meet each framework's requirements). At this time you have to let UC override the build location (as it does by default) or this will not work.
    Semantic Versioning
    By now many of you have been using UppercuT for a while and have watched how we have done versioning. Many of you who use git already know we put the revision hash in the informational/product version as the last octet. At v1, UppercuT has adopted the semantic versioning scheme. What does that mean? This is a short read, but a good one: http://SemVer.org
    SemVer (Semantic Versioning) is really using versioning for what it was meant for. You have three octets: Major.Minor.Patch, as in 1.1.0. UC will use three different versioning concepts, one for the assembly version, one for the file version, and one for the product version.
    All versions - The first three octets of the version are owned by SemVer. Major.Minor.Patch i.e.: 1.1.0
    Assembly Version - The assembly version would much more closely follow SemVer. The last digit is always 0. Major.Minor.Patch.0 i.e.: 1.1.0.0
    File Version - The file version occupies the build number as the last digit. Major.Minor.Patch.Build i.e.: 1.1.0.2650
    Product/Informational Version - The last octet of your product/informational version is the source control revision/hash. Major.Minor.Patch.RevisionOrHash i.e. (TFS/SVN): 1.1.0.235 i.e. (Git/HG): 1.1.0.a45ace4346adef0
    SemVer is not on by default; the passive versioning scheme is still in effect. Notice that version.use_semanticversioning has been added to the UppercuT.config file (and version.patch in support of the third octet).
    Gems Support
    Gems support was added at v1. This will probably be deprecated at some point once there is an announced sunset for Nu v1. Application gems may keep it around since there is no alternative for that yet though (CoApp would be a possible replacement).
    Nitriq Support
    Nitriq is a code analysis tool like NDepend. It's built by Mr. Jon von Gillern. It uses the LINQ query language, so you can use a familiar idiom when analyzing your code base. It's a pretty awesome tool that has a free version for those looking to do code analysis! To use Nitriq with UC, you are going to need the console edition. To take advantage of Nitriq, you just need to update the location of Nitriq in the config. Then add the Nitriq project files at the root of your source.
    Please refer to the Nitriq documentation on how these are created.
    UppercuT v1.1
    Obfuscation
    One thing I started looking into was an easy way to obfuscate my code. I came across EazFuscator, which is both free and awesome. Plus the GUI for it is super simple to use. How do you make obfuscation even easier? Make it a convention and a configurable property in the UC config file! And the code gets obfuscated!
    Closing
    Definitely get out and look at the new release. It contains lots of chocolaty (sp?) goodness. And remember, the upgrade path is almost as simple as drag and drop!

    Read the article

  • Mouse button and keypress counter for Linux?

    - by rakete
    I would like to have some kind of statistics on my daily mouse/keyboard usage to help me make my keyboard layout a little bit more efficient. There is already a question about how to do this on Windows, but I would like to know if anyone is aware whether this is possible under Linux. Another thing I already found is key-mon, a little program for screencasts that displays your mouse and keyboard presses on the screen, which would help me achieve what I want with a little bit of Python coding by myself. But still, if there was a solution already, that would be easier of course. PS: obfuscated link to key-mon because of spam prevention: hxxp://code.google.com/p/key-mon/
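    If you do end up writing that bit of Python yourself, a minimal sketch using the third-party python-evdev package (the device path below is an assumption – look under /dev/input/by-id/ to find your keyboard – and reading input devices normally requires root):

        from collections import Counter
        from evdev import InputDevice, ecodes   # pip install evdev

        dev = InputDevice("/dev/input/event3")  # hypothetical keyboard device node
        counts = Counter()

        for event in dev.read_loop():
            if event.type == ecodes.EV_KEY and event.value == 1:  # 1 = key press
                counts[event.code] += 1          # numeric key code; ecodes.KEY maps codes to names
                print(counts.most_common(5))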

    Read the article

  • Internal and external API architecture

    - by Tacomanator
    The company I work for maintains a successful SaaS product that grew "organically" over the years. We are planning to expand the line with a suite of new products that will share data with the existing product. To support this, we are looking to consolidate business logic into a single place: a web service layer. The WS layer will be used by:
    - The web applications
    - A tool to import data
    - A tool to integrate with other client software (not an API per se)
    We also want to create an API that can be used by those of our customers who are capable of using it to create their own integrations. We are struggling with the following question: should the internal API (aka the WS layer) and the external API be one and the same, with security and permission settings to control what can be done by whom, or should they be two separate applications where the external API just calls the internal API like any other application? So far in our debate it seems that separating them may be more secure, but will add overhead. What have others done in a similar situation?
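    One way to picture the "two separate applications" option is a thin facade: the external API authenticates and authorizes the caller, then delegates to the same internal service layer everything else uses. An illustrative sketch (all names are hypothetical, no particular framework implied):

        class InternalApi:
            """Single home for business logic, shared by the web apps and import tools."""
            def get_customer(self, customer_id):
                return {"id": customer_id, "name": "Example Co"}

        class ExternalApi:
            """Public surface: narrower, versioned and permission-checked."""
            def __init__(self, internal, key_scopes):
                self.internal = internal
                self.key_scopes = key_scopes

            def get_customer(self, api_key, customer_id):
                if "customers:read" not in self.key_scopes.get(api_key, set()):
                    raise PermissionError("API key lacks customers:read scope")
                # Delegate; never re-implement the business rules out here.
                return self.internal.get_customer(customer_id)

        api = ExternalApi(InternalApi(), {"demo-key": {"customers:read"}})
        print(api.get_customer("demo-key", 42))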

    Read the article

  • What if you could work on anything you wanted?

    - by red@work
    This week we've downed our tools and organised ourselves into small project teams or struck out alone. We're working on whatever we like, with whoever we like, wherever we like. We've called it Down Tools week and so far it's a blast. It all started a few months ago with an idea from Neil, our CEO. Neil wanted to capture the excitement, innovation, and productivity of Coding by the Sea and extend this to all Red Gaters working in Product Development. A brainstorm is always a good place to start for an "anything goes" project. Half of Red Gate piled into our largest meeting room (it's pretty big) armed with flip charts, post-its and a heightened sense of possibility. An hour or so later our SQL Servery walls were covered in project ideas. So what would you do, if you could work on anything you wanted? Many projects are related to tools we already make, others are for internal product development use and some are, well, just something completely different. Someone suggested we point a webcam at the SQL Servery lunch queue so we can check it before heading to lunch. That one couldn't wait for Down Tools Week. It was up and running within a few days and even better, it captures the table tennis table too. Thursday is the Show and Tell - I am looking forward to seeing what everyone has come up with. Some of the projects will turn into new products or features so this probably isn't the time or place to go into detail about what is being worked on. Rest assured, you'll hear all about it! We're making a video as we go along too, which will be up on our website as soon as it's ready. In the meantime, all meetings are cancelled, we've got plenty of food in and people are being very creative with the £500 expenses budget (Richard, do you really need an iPad?). It's brilliant to see it all coming together from the idea stage to reality. Catch up with our progress by following #downtoolsweek on Twitter. Who knows, maybe a future Red Gate flagship tool is coming to life right now? By the way, it's business as usual for our customer-facing and internal operations teams. Hmm, maybe we can all down tools for a week and ask Product Development to hold the fort? Post by: Alice Chapman

    Read the article

  • C# or .Net features to cut off assuming no backward compatibility needed?

    - by Gulshan
    Any product or framework evolves. Mainly it's done to catch up with the needs of its users, leverage new computing power and simply make it better. Sometimes the primary design goal also changes with the product. C# or the .NET Framework is no exception. As we can see, the present-day 4th version is very different compared with the first one. But one thing comes as a barricade to this evolution - backward compatibility. In most frameworks/products there are features that would have been cut off if there were no need to support backward compatibility. In your opinion, what are these features in C#/.NET? Please mention one feature per answer.

    Read the article

  • How does EJIE, Basque Government's IT arm, use Oracle WebLogic

    - by Ruma Sanyal
    Watch Mike Lehmann, Senior Director of Product Management from Oracle and Oscar Guadilla, Senior Architect from EJIE, Basque Government's IT Company, discuss EJIE's implementation of Oracle WebLogic Server. Hear EJIE's history with Oracle WebLogic Server, how and why they are using it for its web application platform, common services, file services, and intranet and the benefits they are gleaning. In addition, hear how EJIE is using WebLogic JMS for document management common service integration in its Eco-government project. While you are at it, since you are at our youtube channel (youtube.com/oracleweblogic) already, take a look at the various 'how to' videos Jeff West, Steve Button and others from our product management team have published here. Topics such as WebLogic Maven Plugin, TopLink Grid, How to Patch a WebLogic domain and much more are covered. Great way to spend some of your downtime during the holidays! :)   

    Read the article

  • LUKS no longer accepts my passphrase

    - by Two Spirit
    I created a 4-drive RAID5 setup using mdadm, upgrading from 2TB drives to the new Hitachi 7200RPM 4TB drives. I can initially open my LUKS partition, but later can no longer access it. I can no longer access my LUKS partition even though I have the right passphrases. It was working, and then at an unknown point in time I lose access to LUKS. I've used the same procedures for upgrading from 500G to 1TB to 1.5TB to 2TB. After the first time this happened a week ago, I thought maybe there was some corruption, so I added a 2nd key as a backup. After the second time the LUKS became inaccessible, none of the keys worked. I put LUKS on it using:
        cryptsetup -c aes -s 256 -y luksFormat /dev/md0
        # cryptsetup luksOpen /dev/md0 md0_crypt
        Enter LUKS passphrase:
        Enter LUKS passphrase:
        Enter LUKS passphrase:
        Command failed: No key available with this passphrase.
    The first time this happened while I was upgrading to 4TB drives, I thought it was a fluke, and ultimately had to recover from backups. I went and used luksAddKey to add a 2nd key as a backup. It happened again and I tried both passphrases, and neither worked. The only thing I'm doing differently this time around is that I've upgraded to 4TB drives, which use GPT instead of fdisk. The last time I had to even reboot the box was over 2 years ago. I'm using ubuntu-8.04-server with kernel 2.6.24-29 and upgraded to 2.6.24-31, but that didn't fix the problem.
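    A couple of non-destructive checks worth running before assuming the passphrases are gone (standard cryptsetup commands; note that luksHeaderBackup may not exist in the fairly old cryptsetup shipped with Ubuntu 8.04):

        cryptsetup isLuks /dev/md0 && echo "LUKS header recognised" || echo "LUKS header NOT recognised"
        cryptsetup luksDump /dev/md0    # shows header details and which key slots are populated
        # once the volume opens again, keep a header backup so a damaged header is recoverable:
        cryptsetup luksHeaderBackup /dev/md0 --header-backup-file /root/md0-luks-header.img

    If luksDump fails or shows empty key slots, the header itself has been damaged or overwritten (for example by re-partitioning with GPT tools), which would explain why valid passphrases stop working.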

    Read the article

  • A Hot Topic - Profitability and Cost Management

    - by john.orourke(at)oracle.com
    Maybe it's due to the recent recession or the current economic recovery, but a hot topic and area of focus for many organizations these days is profitability and cost management.  For most organizations, aggressive cost-cutting and cost management were critical to remaining profitable while top line revenue was flat or shrinking.  However, now we are seeing many organizations taking a more "surgical" approach to profitability and cost management, by accurately allocating revenue and costs to individual product lines, services, customer segments, locations, channels and other lines of business to understand which ones are truly profitable and which ones are not.  Based on these insights, managers can make more informed decisions about which products or services to invest in or retire, how to price their products or services for different customer segments, and where to focus their marketing and customer service resources. The most common industries where this product, service and customer-focused costing and profitability analysis is being adopted include financial services, consumer packaged goods, retail and manufacturing.  However, we are seeing adoption of profitability and cost management applications in other industries and use cases.  Here are a few examples:
    Telecommunications Industry:  Network Costing and Management to identify the most cost-effective and/or profitable network areas, to optimize existing resources, infrastructure and network capacity.  Regulatory Cost Accounting to perform more accurate allocations of revenue and costs across services and customer segments, improve the ability to set billing rates for future periods for various products and customer segments, and more easily develop the analysis needed for rate case proposals.
    Healthcare Insurance:  Visually justifiable Medical Loss Ratio results, better knowledge of the cost to service healthcare plans and members, accurate understanding of member segment and plan profitability, improved marketing programs through better member segmentation.
    Public Sector:  Statutory / Regulatory Compliance:  A variety of statutory and regulatory documents state explicitly or implicitly that the use of government resources must be properly tracked and tied to performance goals.  Managerial costing methods implemented through Cost Management applications provide unparalleled visibility into costs and shared services usage throughout a Public Sector agency. Funding Support:  Regulations require public sector funding requests to be evaluated based upon the ability to achieve performance goals against the associated cost.   Improved visibility and understanding of the costs of different programs/services means that organizations can demonstrably monitor performance and the associated resource costs, improving the chances of having their funding requests granted.
    Profitability and Cost Management is one of the fastest-growing solution areas in Oracle's Enterprise Performance Management product line and we are seeing a growing number of customer successes across geographies and industries.  Listed below are just a few examples.
    Here's a link to the replay from a recent webcast on this topic, which featured Schroders Plc, a UK-based Financial Services company: http://www.oracle.com/go/?&Src=7011668&Act=168&pcode=WWMK10037859MPP043
    Here's a link to a case study on Shenhua Guohua Power in China: http://www.oracle.com/us/corporate/customers/shenhua-snapshot-159574.pdf
    Here's a link to information on Oracle's web site about our profitability and cost management solutions: http://www.oracle.com/us/solutions/ent-performance-bi/performance-management/profitability-cost-mgmt/index.html

    Read the article

  • Consumer Electronics Show (CES) Summit: Best Practices in Transforming Channels and Partnerships

    - by charles.knapp
    Expanding consumer demand is driving the entire high technology industry, accompanied by product lifecycles as short as a few months, continued pricing and promotion pressures, and increased globalization. Unifying global channel management, operations, and execution flow will increase efficiency and growth. IT can help, but one must think beyond generic ERP and CRM. Please join Oracle and IBM at the Bellagio Hotel in Las Vegas, Wednesday January 5, 1-7 pm. Learn from IBM, VTech, Plantronics, Cisco, Symantec and Oracle High Tech Product Strategy how to improve:
    - Channel sales, marketing, and operations management - enhance NPI, sales, forecasts, training, promotion planning, execution and settlement
    - Winning the deal - determining the right price for the right deal for the "perfect quote", capturing the order and order management
    - Collaborative and rapid supply chain planning - improve agility, inventory turns, and profits
    Register now for this FREE event. We hope you'll join us for our Oracle High Technology CES Summit and networking reception with your peers.

    Read the article

  • System does not detect USB pendrives

    - by cshubhamrao
    This USB thing is driving me crazy. 2 problems in a time span of 3 hours. OK, I was already trying to cope with the "wrong fs type, bad option, bad superblock" error while mounting FAT drives when, to my amazement, I discovered that none of the USB storage devices showed up in the system. Useful outputs - tail /var/log/syslog:
        root@shubham-pc:~# tail /var/log/syslog
        Nov 7 21:41:47 shubham-pc colord: device removed: sysfs-HP-v250w
        Nov 7 21:41:51 shubham-pc kernel: [ 3441.529542] usb 1-1: USB disconnect, device number 11
        Nov 7 21:41:53 shubham-pc kernel: [ 3443.820029] usb 1-2: new high-speed USB device number 14 using ehci-pci
        Nov 7 21:41:54 shubham-pc kernel: [ 3443.952897] usb 1-2: New USB device found, idVendor=0781, idProduct=5530
        Nov 7 21:41:54 shubham-pc kernel: [ 3443.952905] usb 1-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
        Nov 7 21:41:54 shubham-pc kernel: [ 3443.952909] usb 1-2: Product: Cruzer
        Nov 7 21:41:54 shubham-pc kernel: [ 3443.952913] usb 1-2: Manufacturer: SanDisk
        Nov 7 21:41:54 shubham-pc kernel: [ 3443.952917] usb 1-2: SerialNumber: 20060876420EC6016847
        Nov 7 21:41:54 shubham-pc mtp-probe: checking bus 1, device 14: "/sys/devices/pci0000:00/0000:00:1d.7/usb1/1-2"
        Nov 7 21:41:54 shubham-pc mtp-probe: bus: 1, device: 14 was not an MTP device

    Read the article

  • Telesharp – An Application Repository for .NET applications

    - by cibrax
    A year ago, we released SO-Aware as our first product in Tellago Studios. SO-Aware represented a new way to manage web services and all the related artifacts like configuration, tests or monitoring data in the Microsoft stack. It was based on the idea of using a lightweight SOA governance approach with a central repository exposed through RESTful services. At that point, we thought the same idea could be extended to enterprise applications in general by providing a generic repository for many of the runtime or design-time artifacts generated during development, like configuration, application description or topology (a high-level view of the components that make up a system), logging information or binaries. It took us several months to give form to that idea and implement it as a product, but it is finally here and I am very proud to announce the release today under the name of "TeleSharp". TeleSharp provides, in a nutshell, the following features:
    1. Configure your application topology in a central repository. Application topology in this context means that you can decompose your application and describe it in terms of components and how they interact with each other. For example, you can tell that the CRM system is made up of a couple of WCF services and an ASP.NET MVC front end.
    2. Centralize configuration for your applications and components. You can import existing .NET configuration sections into the repository and associate them with the different components. In addition, environment overrides are supported for the configuration sections. We provide tooling and extensions in Visual Studio for managing all the configuration, and a set of PowerShell commands for automating the configuration deployment.
    3. Browse all the assemblies and types on your application servers remotely in a web browser, using an interface similar to any of the existing .NET reflection tools. You can easily determine this way whether the server is running the correct version of your applications.
    4. Centralize logging and exception management into the repository. You get different reports and a pivot viewer experience for browsing all the logging information generated by your applications. In addition, TeleSharp provides different providers for pushing the logging information to the central repository using well-known frameworks like ELMAH, Log4Net, EntLib or even Windows ETW.
    The central repository itself is implemented as a set of OData services that any application can easily consume using regular HTTP. You can read more details in this introductory post. If you think this product can be a good fit for your organization, you can request a trial version on our Tellago Studios website.

    Read the article

  • What is the difference between development and R&D?

    - by MainMa
    I was asked by a colleague to explain clearly the difference between ordinary development and research and development (R&D) and was unable to do it. After reading Wikipedia, I still don't have the precise answer. According to Wikipedia (slightly modified): There are two primary models: In one model, the primary function is to develop new products; in the other model, the primary function is to discover and create new knowledge about scientific and technological topics for the purpose of uncovering and enabling development of valuable new products, processes, and services. The first model is confusing. Does it mean that development (not R&D) consists exclusively of adding new features to a product, solving bugs and doing maintenance? What if something which was previously developed as a new feature becomes a separate product? The second model is less confusing, but still, how do you qualify whether something is new knowledge or existing knowledge which is just being rediscovered? Later, Wikipedia adds that ordinary development is different from R&D because of its nearly immediate profit or immediate improvement. It's still not clear enough. How do you qualify "nearly immediate profit"? What if a task has an immediate profit but requires heavy research? Or if it is basic but has uncertain profit, like the enforcement of a common style over the codebase? For example, does it belong to development or R&D to:
    - Develop an engine which abstracts access to the database, enormously simplifying and shortening the code of other applications (existing ones or ones which will be written in the future) which need to access the database?
    - Establish a new service-oriented architecture for the entire organization of company resources, in order to move from a bunch of separate and autonomous applications to a set of well-organized, interconnected web services, like what is used by Amazon?
    - Design a new communication protocol to allow faster replication of data between two data centers of the company?
    - Conceive a new type of software testing while working on a specific product, knowing that this type of testing will improve/simplify the testing process?
    - Prove that functional programming is more appropriate than OOP for a specific application, based on evidence, logic and previous experience?
    - Enhance the existing application by adding gestures on touch screens, after doing studies and testing that show that those gestures improve the productivity of the users by a ratio of at least 1.4 for a precise set of tasks?
    - Find a way to strongly enhance the Power Usage Effectiveness (PUE) of a data center?
    - Create a Domain-Specific Language (DSL)?
    In short, how could I determine whether I'm doing R&D while working on something?

    Read the article

  • E-commerce for custom orders/customer image upload

    - by ansarob
    We have a client that needs an e-commerce site set up pretty quickly. As I have no experience with e-commerce, I am looking for some guidance. Basically, the two big features we need are:
    - Ability for the customer to add info about an order (example: the name the customer wants to be put on the customizable product they ordered)
    - Ability for the customer to upload a photo of the product to be customized
    I hope this makes sense. Right now I am really looking into Shopify, but I can't tell if it does everything we need. I know you can add order notes when checking out, but not sure about image upload (maybe it can be added as an app through the API?).

    Read the article
