Search Results

Search found 70675 results on 2827 pages for 'master data management'.

Page 321/2827

  • Media meta file system for windows

    - by Chris Marisic
    I have a large assortment of media that is currently arranged using folders. As the library has grown I've started to notice that folders aren't the best at conveying meaning. Also, as the number of folders serving as categories/tags has grown, it has led to duplicated data because I didn't realize an item was already filed under a different tag. As I started to think about this I realized tag cloud visualization would be tremendously powerful, and I figured there has to be something like this out there.

    Read the article

  • Data Sources (ODBC) hangs when trying to create a new database connection

    - by FredrikD
    When I try to create a new database connection, the Data Sources (ODBC) program hangs or takes a very long time to find the list of available SQL Servers. This only happens when there are other computers on the network; when my machine (a standard Windows 7 laptop) is alone, it works just fine. My question is: what should I look for in terms of SQL Server or ODBC configuration that will eliminate this random behaviour?

    Read the article

  • gunzip: invalid compressed data--format violated

    - by Arunjith
    Problem definition: I transferred a tar.gz file from a Linux machine to a Windows partition. The Windows partition is mounted on the Linux server via cifs. OS: Red Hat Enterprise Linux Server release 5. Symptom: after the copy completes successfully, an integrity check with gunzip -t gives the following error: gunzip -t Backup-28--Jun--2011--Tuesday.tar.gz gunzip: Backup-28--Jun--2011--Tuesday.tar.gz: invalid compressed data--format violated Trying to untar it (tar -xvzf) fails as well.
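    A quick way to pin down whether the transfer itself damaged the archive is to compare checksums on both ends of the copy; a minimal sketch, assuming the Windows share is cifs-mounted at /mnt/windows (a hypothetical path):

      # on the Linux source, checksum the original archive
      md5sum Backup-28--Jun--2011--Tuesday.tar.gz

      # checksum the copy sitting on the cifs-mounted Windows partition
      md5sum /mnt/windows/Backup-28--Jun--2011--Tuesday.tar.gz

      # if the sums differ, the file was corrupted in transit (re-copy, or use
      # rsync/scp instead); if they match, the source archive is already broken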

    Read the article

  • Preventing users from deleting SQL data

    - by me2011
    We just purchased a program that requires the users to have an account in the MS SQL server, with read/write access to the program's database. My concern is that since these users will now have write access to the database, they could connect directly to the SQL server outside of the program's client and then mess with the data directly in the tables. Is there any way I can prevent access to the database while still allowing access via the client program?
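    One common mitigation, sketched here rather than prescribed, is to keep the users' own logins away from the tables and let data flow only through stored procedures or an application role that the client program activates after connecting (which only helps if the purchased program supports it). The role names and schema below are hypothetical:

      -- deny ad-hoc table access to the database role the users belong to
      DENY SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO [ProgramUsers];

      -- option 1: allow data access only through stored procedures
      GRANT EXECUTE ON SCHEMA::dbo TO [ProgramUsers];

      -- option 2: an application role that only the client program can activate,
      -- so table permissions exist solely inside the application's own connections
      CREATE APPLICATION ROLE AppOnlyRole WITH PASSWORD = 'use-a-strong-password';
      GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO AppOnlyRole;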

    Read the article

  • Can't recover hard drive

    - by BreezyChick89
    My drive got corrupt after a thunderstorm. It used to be 1 partition of 2.5tb but now it shows 2 partitions. It's weird because 300gig free space is about how much it had before corrupting, but it was part of the first partition. I tried $ sudo resize2fs -f /dev/sdb1 Resizing the filesystem on /dev/sdb1 to 536870911 (4k) blocks. resize2fs: Can't read an block bitmap while trying to resize /dev/sdb1 Please run 'e2fsck -fy /dev/sdb1' to fix the filesystem after the aborted resize operation. sudo e2fsck -f /dev/sdb1 e2fsck 1.42 (29-Nov-2011) The filesystem size (according to the superblock) is 610471680 blocks The physical size of the device is 536870911 blocks Either the superblock or the partition table is likely to be corrupt! Abort? n .... Error reading block 537395215 (Invalid argument) while reading inode and block bitmaps. Ignore error<y>? yes Force rewrite<y>? yes Error writing block 537395215 (Invalid argument) while reading inode and block bitmaps. Ignore error<y>? yes ... A lot of these. I can't use e2fsck -y because the first question aborts if I say "y". If I put a weight on the 'y' key it fails because none of the errors were really fixed. I asked this question before and tried using gparted but gparted fails because the first thing it does is: e2fsck -f -y -v /dev/sdb1 giving the same error. The disk status says healthy. There are no bad blocks. This is very frustrating because I can see the data in testdisk and it looks like it's all there. I already bought another 2.5tb drive and made a clone using dd. The next step if I can't fix this is to wipe that drive and just move the data with testdisk, but it seems certain folders will copy infinitely until the drive is full because of symlinks or errors so it's also a difficult option. sudo fdisk -l Disk /dev/sdb: 2500.5 GB, 2500495958016 bytes 255 heads, 63 sectors/track, 304001 cylinders, total 4883781168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x0005da5e Device Boot Start End Blocks Id System /dev/sdb1 * 2048 4294969342 2147483647+ 83 Linux sudo badblocks -b 4096 -n -o badfile /dev/sdb 610471680 536870911 badfile is empty I also tried changing the superblock with "fsck -b" but all of them are the same.

    Read the article

  • MODx not displaying correct image path

    - by Austin
    In the MODx WYSIWYG, whenever I click the Image icon to insert an image and then browse for an image, it generates the wrong path: /data/12/1/111/99/1111262/user/1169144/htdocs/images/image.jpg instead of assets/images/image.jpg. I have checked my Resource URL and Resource Path and they both look correct. Has anyone ever experienced MODx rewriting your paths to the server path instead of what they should be?

    Read the article

  • Mac: Resize windows partition w/o destroying data?

    - by jbehren
    Is there a method/utility to resize the partitions on a dual-boot MacBook Air in place, without destroying their contents? I made the Windows partition too small initially, and all the places I've looked state that resizing now using Boot Camp will destroy all data on the Win7 partition. I would prefer free, but I'm open to a reasonably priced utility that can grow the Win7 partition into the available space (I can use Boot Camp to shrink the OS X partition without any problems).

    Read the article

  • How to export/transfer DHCP data?

    - by sreevatsa
    We have a very old HP ML110 server that is giving hardware (power) trouble, and we are hosting DHCP services on it on Windows 2000. Now I would like to transfer all the DHCP data (it has reserved IPs) from this old server to a new server running Windows 2003. How do I do this?
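    For reference, the route usually described for this kind of migration (stated as an assumption about your setup, not something verified against it) is to export the DHCP database on the old box - on Windows 2000 typically with the dhcpexim.exe Resource Kit tool - copy the export file across, and import it on the Windows Server 2003 side with netsh. The file path below is hypothetical:

      rem on the new Windows Server 2003 server, after installing the DHCP Server
      rem service and copying over the export file produced on the old server:
      netsh dhcp server import C:\dhcpdb.txt all

      rem (if both machines ran Server 2003, the matching export would be)
      netsh dhcp server export C:\dhcpdb.txt all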

    Read the article

  • Moving Data from One Column into Six Columns

    - by Alex Rudd
    I have an Excel sheet that has six columns of data currently all combined into one column. I need to separate them out, but the issue is that the first column contains words that are sometimes one word and sometimes two. Here is an example: Twin 70 442 186 310 221 Twin Futon 70 389 160 272 195 XL twin 70 463 196 324 231 XL Twin Futon 70 418 174 293 209 Double 100 590 245 413 295 How can I separate these data sets while keeping the words together in the first column?
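    What makes the rows separable is that only the last five fields are numeric, so you can split each row from the right and keep whatever is left over as the name. If you would rather script it than wrestle with Excel's Text to Columns, here is a minimal Python sketch (assuming the rows have been exported to a plain text file, one per line; the file names are hypothetical):

      import csv

      with open("rows.txt") as src, open("split.csv", "w", newline="") as dst:
          writer = csv.writer(dst)
          for line in src:
              line = line.strip()
              if not line:
                  continue
              # only the last five fields are numbers, so split from the right;
              # whatever remains on the left ("Twin", "XL Twin Futon", ...) stays intact
              parts = line.rsplit(None, 5)
              writer.writerow(parts)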

    Read the article

  • How to stay productive? What time management software is available?

    - by andrewsomething
    So since I started using askubuntu.com I've spent entirely too much time here answering other people's questions. Now maybe someone could help me with that by answering this one. I'm looking for time management software for Ubuntu. There are a number of these programs floating around for Windows. RescueTime is one that is very popular. The key features that I'd like to see in a linux app that RescueTime has are: Automatically records what application you are using, including what websites you visit. Reports and graphs on your time usage. Notifications for when you have spent too much time on "distractions." While RescueTime doesn't officially support linux, there is an open source RescueTime Linux Uploader. Unfortunately, it seems to only support Firefox and Epiphany for website tracking. I'm a Chromium user. The other major drawback to RescueTime is that it is a web service. I'd much rather not upload detailed information about how I spend my time to some third party. Google already knows too much about me as it is. Project Hamster, a GNOME time management app, comes so close. Sadly, it does not automatically track what you are doing. If I had enough discipline to manually report to an applet what I was up to, I doubt I'd need this. (How cool would it be if they provided some Zeitgeist integration to handle that part?)

    Read the article

  • Should I stay in my degree or take an opportunity for management experience?

    - by Adam
    I've read a couple of other posts along these lines and they've been helpful, but I'm wondering if my case is any different. I've been working towards my CS degree while working part time in a programming job. I'm now about two years away from getting my degree and was just offered a management position at my job. This would mean that I have to work full-time at my job and I can't really work towards my degree in person anymore. My school doesn't really offer CS classes after hours nor online. From the other posts I read, it seems that getting a degree is very important. Does having management experience trump that? I'm currently leaning towards taking the job and finding some sort of online degree. Also, my school only offers a business degree online; could I just get that in its place? Does the type of degree really matter? For some jobs it's not the type of degree that matters, just that you have one; is there any merit to this in the programming industry? Thanks :)

    Read the article

  • What do you need to implement to provide a Content Set for an NSArrayController?

    - by whuuh
    Hey, I am writing something in Xcode. I use Core Data for persistency and link the view and the model together with Cocoa Bindings; pretty much your ordinary Core Data application. I have an array controller (NSArrayController) in my Xib. This has its managedObjectContext bound to the AppDelegate, as is convention, and tracks an entity. So far so good. Now, the "Content Set" binding of this NSArrayController limits its content set (as you'd expect) by a key path from the selection in another NSArrayController (otherAc.selection.detailsOfMaster). This is the usual way to implement a Master-Detail relationship. I want to change the key path at runtime, using other controls. This way, I could return a content set that includes several other content sets, which is all advanced and beyond Interface Builder. To achieve this, I think I should bind the Content Set to my AppDelegate instead. I have tried to do this, but don't know what methods to implement. If I just create the KVC methods (objectSet, setObjectSet), then I can provide a Content Set for the Array Controller in the contentSet method. However, I don't think I'm binding this properly, because it doesn't "refresh". I'm new to bindings; what do I need to implement to properly update the Content Set when other things, like the selection in the master NSArrayController, change?

    Read the article

  • Multi-user job/task tracking/queue software

    - by Bmsgaffer86
    Background: I test and repair electronic products with a team. There are many 'jobs' going through my lab at any point in time. It is getting difficult to track what's coming in and going out because I don't do every test or repair myself. Target: a user can enter a job when they drop it off in my lab, and it will appear on the master list or queue. It needs to have priorities and due dates that can be adjusted by users. Ideally this would be web based and open source, but I am flexible. Dream: a large monitor displaying a list of jobs in the master queue with details. This is very optional though, and would only happen in the best-case scenario. I have done MANY hours of Googling and I am not sure if I have been using the right terminology, but I have not found anything that is simple enough to stand alone yet complex enough to be multi-user based. I am mildly proficient in VB, and have the drive to piece together anything I have to. I am open to ANY help or suggestions.

    Read the article

  • Best (in your opinion) Git workflow for the case when releases are done on demand (in most cases 1-2 tickets at once)

    - by Robert
    I'm rather a Git newbie and I'm looking for your advice. In the company I work for we have a "workflow" where we have a single Git repo for our project with 2 branches: master and prod. All devs work on the master branch. If a ticket is done (from the dev perspective), we push to the repo. If all tests pass, we make a release. The issue is that in most cases, the request from the business guys sounds like: "please release ticket A or A && B". In most cases, I end up doing something like git checkout prod git cherry-pick --no-commit commit_hash git commit -m "blah blah to prod" -a As you can see this is not a perfect solution, and I have a strong impression it is a sure way to nowhere, especially when change A depends on changes B and C. Do you have any suggestions on how to handle releases on demand if several devs work on the same branch and the flow looks like I described above? All suggestions are welcome. I cannot change the business processes, so they will have to stay as they are - unfortunately.
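    One alternative that is often suggested for exactly this situation (offered as a sketch, not as the one true answer) is to cut a short-lived branch per ticket from prod, so that an on-demand release becomes a merge of just the tickets the business asked for instead of a cherry-pick off master. Branch and ticket names below are hypothetical:

      # start each ticket from the code that is actually in production
      git checkout -b ticket-A prod
      # ...commit, push, test...

      # when the request is "please release A && B", merge only those branches
      git checkout prod
      git merge --no-ff ticket-A
      git merge --no-ff ticket-B
      git push origin prod

      # keep master in sync with what has shipped
      git checkout master
      git merge prod

    Tickets that depend on each other still have to be released together, but at least the dependency surfaces as a merge conflict rather than as silently missing commits.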

    Read the article

  • The SSIS tuning tip that everyone misses

    - by Rob Farley
    I know that everyone misses this, because I’m yet to find someone who doesn’t have a bit of an epiphany when I describe this. When tuning Data Flows in SQL Server Integration Services, people see the Data Flow as moving from the Source to the Destination, passing through a number of transformations. What people don’t consider is the Source, getting the data out of a database. Remember, the source of data for your Data Flow is not your Source Component. It’s wherever the data is, within your database, probably on a disk somewhere. You need to tune your query to optimise it for SSIS, and this is what most people fail to do. I’m not suggesting that people don’t tune their queries – there’s plenty of information out there about making sure that your queries run as fast as possible. But for SSIS, it’s not about how fast your query runs. Let me say that again, but in bolder text: The speed of an SSIS Source is not about how fast your query runs. If your query is used in a Source component for SSIS, the thing that matters is how fast it starts returning data. In particular, those first 10,000 rows to populate that first buffer, ready to pass down the rest of the transformations on its way to the Destination. Let’s look at a very simple query as an example, using the AdventureWorks database: We’re picking the different Weight values out of the Product table, and it’s doing this by scanning the table and doing a Sort. It’s a Distinct Sort, which means that the duplicates are discarded. It'll be no surprise to see that the data produced is sorted. Obvious, I know, but I'm making a comparison to what I'll do later. Before I explain the problem here, let me jump back into the SSIS world... If you’ve investigated how to tune an SSIS flow, then you’ll know that some SSIS Data Flow Transformations are known to be Blocking, some are Partially Blocking, and some are simply Row transformations. Take the SSIS Sort transformation, for example. I’m using a larger data set for this, because my small list of Weights won’t demonstrate it well enough. Seven buffers of data came out of the source, but none of them could be pushed past the Sort operator, just in case the last buffer contained the data that would be sorted into the first buffer. This is a blocking operation. Back in the land of T-SQL, we consider our Distinct Sort operator. It’s also blocking. It won’t let data through until it’s seen all of it. If you weren’t okay with blocking operations in SSIS, why would you be happy with them in an execution plan? The source of your data is not your OLE DB Source. Remember this. The source of your data is the NCIX/CIX/Heap from which it’s being pulled. Picture it like this... the data flowing from the Clustered Index, through the Distinct Sort operator, into the SELECT operator, where a series of SSIS Buffers are populated, flowing (as they get full) down through the SSIS transformations. Alright, I know that I’m taking some liberties here, because the two queries aren’t the same, but consider the visual. The data is flowing from your disk and through your execution plan before it reaches SSIS, so you could easily find that a blocking operation in your plan is just as painful as a blocking operation in your SSIS Data Flow. Luckily, T-SQL gives us a brilliant query hint to help avoid this. OPTION (FAST 10000) This hint means that it will choose a query which will optimise for the first 10,000 rows – the default SSIS buffer size. And the effect can be quite significant. 
First let’s consider a simple example, then we’ll look at a larger one. Consider our weights. We don’t have 10,000, so I’m going to use OPTION (FAST 1) instead. You’ll notice that the query is more expensive, using a Flow Distinct operator instead of the Distinct Sort. This operator is consuming 84% of the query, instead of the 59% we saw from the Distinct Sort. But the first row could be returned quicker – a Flow Distinct operator is non-blocking. The data here isn’t sorted, of course. It’s in the same order that it came out of the index, just with duplicates removed. As soon as a Flow Distinct sees a value that it hasn’t come across before, it pushes it out to the operator on its left. It still has to maintain the list of what it’s seen so far, but by handling it one row at a time, it can push rows through quicker. Overall, it’s a lot more work than the Distinct Sort, but if the priority is the first few rows, then perhaps that’s exactly what we want. The Query Optimizer seems to do this by optimising the query as if there were only one row coming through: This 1 row estimation is caused by the Query Optimizer imagining the SELECT operation saying “Give me one row” first, and this message being passed all the way along. The request might not make it all the way back to the source, but in my simple example, it does. I hope this simple example has helped you understand the significance of the blocking operator. Now I’m going to show you an example on a much larger data set. This data was fetching about 780,000 rows, and these are the Estimated Plans. The data needed to be Sorted, to support further SSIS operations that needed that. First, without the hint. ...and now with OPTION (FAST 10000): A very different plan, I’m sure you’ll agree. In case you’re curious, those arrows in the top one are 780,000 rows in size. In the second, they’re estimated to be 10,000, although the Actual figures end up being 780,000. The top one definitely runs faster. It finished several times faster than the second one. With the amount of data being considered, these numbers were in minutes. Look at the second one – it’s doing Nested Loops, across 780,000 rows! That’s not generally recommended at all. That’s “Go and make yourself a coffee” time. In this case, it was about six or seven minutes. The faster one finished in about a minute. But in SSIS-land, things are different. The particular data flow that was consuming this data was significant. It was being pumped into a Script Component to process each row based on previous rows, creating about a dozen different flows. The data flow would take roughly ten minutes to run – ten minutes from when the data first appeared. The query that completes faster – chosen by the Query Optimizer with no hints, based on accurate statistics (rather than pretending the numbers are smaller) – would take a minute to start getting the data into SSIS, at which point the ten-minute flow would start, taking eleven minutes to complete. The query that took longer – chosen by the Query Optimizer pretending it only wanted the first 10,000 rows – would take only ten seconds to fill the first buffer. Despite the fact that it might have taken the database another six or seven minutes to get the data out, SSIS didn’t care. Every time it wanted the next buffer of data, it was already available, and the whole process finished in about ten minutes and ten seconds. When debugging SSIS, you run the package, and sit there waiting to see the Debug information start appearing. 
You look for the numbers on the data flow, and watch for operators going Yellow and Green. Without the hint, I’d sit there for a minute. With the hint, just ten seconds. You can imagine which one I preferred. By adding this hint, it felt like a magic wand had been waved across the query, to make it run several times faster. It wasn’t the case at all – but it felt like it to SSIS.
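    For the Weight example walked through above, applying the hint is just a matter of appending it to the query used in the OLE DB Source; a minimal sketch against AdventureWorks (schema and table names assumed from the standard sample database):

      -- the original shape: distinct Weights, resolved with a blocking Distinct Sort
      SELECT DISTINCT Weight
      FROM Production.Product;

      -- the same query optimised for SSIS: favour returning the first buffer-full
      -- of rows quickly (10000 matches the default SSIS buffer size)
      SELECT DISTINCT Weight
      FROM Production.Product
      OPTION (FAST 10000);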

    Read the article

  • What’s New for Oracle Commerce? Executive QA with John Andrews, VP Product Management, Oracle Commerce

    - by Katrina Gosek
    Oracle Commerce was for the fifth time positioned as a leader by Gartner in the Magic Quadrant for E-Commerce. This inspired me to sit down with Oracle Commerce VP of Product Management, John Andrews to get his perspective on what continues to make Oracle a leader in the industry and what’s new for Oracle Commerce in 2013. Q: Why do you believe Oracle Commerce continues to be a leader in the industry? John: Oracle has a great acquisition strategy – it brings best-of-breed technologies into the product fold and then continues to grow and innovate them. This is particularly true with products unified into the Oracle Commerce brand. Oracle acquired ATG in late 2010 – and then Endeca in late 2011. This means that under the hood of Oracle Commerce you have market-leading technologies for cross-channel commerce and customer experience, both designed and developed in direct response to the unique challenges online businesses face. And we continue to innovate on capabilities core to what our customers need to be successful – contextual and personalized experience delivery, merchant-inspired tools, and architecture for performance and scalability. Q: It’s not a slow moving industry. What are you doing to keep the pace of innovation at Oracle Commerce? John: Oracle owes our customers the most innovative commerce capabilities. By unifying the core components of ATG and Endeca we are delivering on this promise. Oracle Commerce is continuing to innovate and redefine how commerce is done and in a way that drive business results and keeps customers coming back for experiences tailored just for them. Our January and May 2013 releases not only marked the seventh significant releases for the solution since the acquisitions of ATG and Endeca, we also continue to demonstrate rapid and significant progress on the unification of commerce and customer experience capabilities of the two commerce technologies. Q: Can you tell us what was notable about these latest releases under the Oracle Commerce umbrella? John: Specifically, our latest product innovations give businesses selling online the ability to get to market faster with more personalized commerce experiences in the following ways: Mobile: the latest Commerce Reference Application in this release offers a wider range of examples for online businesses to leverage for iOS development and specifically new iPad reference capabilities. This release marks the first release of the iOS Universal application that serves both the iPhone and iPad devices from a single download or binary. Business users can now drive page content management and layout of search results and category pages, as well as create additional storefront elements such as categories, facets / dimensions, and breadcrumbs through Experience Manager tools. Cross-Channel Commerce: key commerce platform capabilities have been added to support cross-channel commerce, including an expanded inventory model to maintain inventory for stores, pickup in stores and Web-based returns. Online businesses with in-store operations can now offer advanced shipping options on the web and make returns and exchange logic easily available on the web. Multi-Site Capabilities: significant enhancements to the Commerce Platform multi-site architecture that allows business users to quickly launch and manage multiple sites on the same cluster and share data, carts, and other components. 
First introduced in 2010, with this latest release business users can now partition or share customer profiles, control users’ site-based access, and manage personalization assets using site groups. Internationalization: continued language support and enhancements for business user tools as well and search and navigation. Guided Search now supports 35 total languages with 11 new languages (including Danish, Arabic, Norwegian, Serbian Cyrillic) added in this release. Commerce Platform tools now include localized support for 17 locales with 4 new languages (Danish, Portuguese (European), Finnish, and Thai). No development or customization is required in order for business users to use the applications in any of these supported languages. Business Tool Experience: valuable new Commerce Merchandising features include a new workflow for making emergency changes quickly and increased visibility into promotions rules and qualifications in preview mode. Oracle Commerce business tools continue to become more and more feature rich to provide intuitive, easy- to-use (yet powerful) capabilities to allow business users to manage content and the shopping experience. Commerce & Experience Unification: demonstrable unification of commerce and customer experience capabilities include – productized cartridges that provide supported integration between the Commerce Platform and Experience Management tools, cross-channel returns, Oracle Service Cloud integration, and integrated iPad application. The mission guiding our product development is to deliver differentiated, personalized user experiences across any device in a contextual manner – and to give the business the best tools to tune and optimize those user experiences to meet their business objectives. We also need to do this in a way that makes it operationally efficient for the business, keeping the overall total cost of ownership low – yet also allows the business to expand, whether it be to new business models, geographies or brands. To learn more about the latest Oracle Commerce releases and mission, visit the links below: • Hear more from John about the Oracle Commerce mission • Hear from Oracle Commerce customers • Documentation on the new releases • Listen to the Oracle ATG Commerce 10.2 Webcast • Listen to the Oracle Endeca Commerce 3.1.2 Webcast

    Read the article

  • Markus Zirn, "Big Data with CEP and SOA" @ SOA, Cloud & Service Technology Symposium 2012

    - by JuergenKress
    ORACLE PROMOTIONAL DISCOUNT FOR EXCLUSIVE ORACLE DISCOUNT, ENTER PROMO CODE: DJMXZ370 Early-Bird Registration is Now Open with Special Pricing! Register before July 1, 2012 to qualify for discounts. Visit the Registration page for details. The International SOA, Cloud + Service Technology Symposium is a yearly event that features the top experts and authors from around the world, providing a series of keynotes, talks, demonstrations, and panels, as well as training and certification workshops - all dedicated to empowering IT professionals to realize modern service technologies and practices in the real world. Click here for a two-page printable conference overview (PDF). Big Data with CEP and SOA - September 25, 2012 - 14:15 Speaker: Markus Zirn, Oracle and Baz Kuthi, Avocent The "Big Data" trend is driving new kinds of IT projects that process machine-generated data. Such projects store and mine using Hadoop/ Map Reduce, but they also analyze streaming data via event-driven patterns, which can be called "Fast Data" complementary to "Big Data". This session highlights how "Big Data" and "Fast Data" design patterns can be combined with SOA design principles into modern, event-driven architectures. We will describe specific architectures that combines CEP, Distributed Caching, Event-driven Network, SOA Composites, Application Development Framework, as well as Hadoop. Architecture patterns include pre-processing and filtering event streams as close as possible to the event source, in memory master data for event pattern matching, event-driven user interfaces as well as distributed event processing. Focus is on how "Fast Data" requirements are elegantly integrated into a traditional SOA architecture. Markus Zirn is Vice President of Product Management covering Oracle SOA Suite, SOA Governance, Application Integration Architecture, BPM, BPM Solutions, Complex Event Processing and UPK, an end user learning solution. He is the author of “The BPEL Cookbook” (rated best book on Services Oriented Architecture in 2007) as well as “Fusion Middleware Patterns”. Previously, he was a management consultant with Booz Allen & Hamilton’s High Tech practice in Duesseldorf as well as San Francisco and Vice President of Product Marketing at QUIQ. Mr. Zirn holds a Masters of Electrical Engineering from the University of Karlsruhe and is an alumnus of the Tripartite program, a joint European degree from the University of Karlsruhe, Germany, the University of Southampton, UK, and ESIEE, France. KEYNOTES & SPEAKERS More than 80 international subject matter experts will be speaking at the Symposium. Below are confirmed keynotes and speakers so far. Over 50% of the agenda has not yet been finalized. Many more speakers to come. View the partial program calendars on the Conference Agenda page. CONFERENCE THEMES & TRACKS Cloud Computing Architecture & Patterns New SOA & Service-Orientation Practices & Models Emerging Service Technology Innovation Service Modeling & Analysis Techniques Service Infrastructure & Virtualization Cloud-based Enterprise Architecture Business Planning for Cloud Computing Projects Real World Case Studies Semantic Web Technologies (with & without the Cloud) Governance Frameworks for SOA and/or Cloud Computing Projects Service Engineering & Service Programming Techniques Interactive Services & the Human Factor New REST & Web Services Tools & Techniques Oracle Specialized SOA & BPM Partners Oracle Specialized partners have proven their skills by certifications and customer references. 
To find a local Specialized partner please visit http://solutions.oracle.com SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit  www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: Markus Zirn,SOA Symposium,Thomas Erl,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • Oracle Enterprise Manager 11g is Here!

    - by chung.wu
    We hope that you enjoyed the launch event. If you missed it, you may still watch it via our on demand webcast, which is being produced and will be posted very shortly. 11gR1 is a major release of Oracle Enterprise Manager, and as one would expect from a big release, there are many new capabilities that appeal to a broad set of audience. Before going into the laundry list of new features, let's talk about the key themes for this release to put things in perspective. First, this release is about Business Driven Application Management. The traditional paradigm of component centric systems management simply cannot satisfy the management needs of modern distributed applications, as they do not provide adequate visibility of whether these applications are truly meeting the service level expectations of the business users. Business Driven Application Management helps IT manage applications according to the needs of the business users so that valuable IT resources can be better focused to help deliver better business results. To support Business Driven Application Management, 11gR1 builds on the work that we started in 10g to provide better support for user experience management. This capability helps IT better understand how users use applications and the experience that the applications provide so that IT can take actions to help end users get their work done more effectively. In addition, this release also delivers improved business transaction management capabilities to make it faster and easier to understand and troubleshoot transaction problems that impact end user experience. Second, this release includes strengthened Integrated Application-to-Disk Management. Every component of an application environment, from the application logic to the application server, to database, host machines and storage devices, etc... can affect end user experience. After user experience improvement needs are identified, IT needs tools that can be used do deep dive diagnostics for each of the application environment component, analyze configurations and deploy changes. Enterprise Manager 11gR1 extends coverage of key application environment components to include full support for Oracle Database 11gR2, Exadata V2, and Fusion Middleware 11g. For composite and Java application management, two key pieces of technologies, JVM Diagnostic and Composite Application Monitoring and Modeler, are now fully integrated into Enterprise Manager so there is no need to install and maintain separate tools. In addition, we have delivered the first set of integration between Enterprise Manager Grid Control and Enterprise Manager Ops Center so that hardware level events can be centrally monitored via Grid Control. Finally, this release delivers Integrated Systems Management and Support for customers of Oracle technologies. Traditionally, systems management tools and tech support were separate silos. When problems occur, administrators used internally deployed tools to try to solve the problems themselves. If they couldn't fix the problems, then they would use some sort of support website to get help from the vendor's support staff. Oracle Enterprise Manager 11g integrates problem diagnostic and remediation workflow. Administrators can use Oracle Enterprise Manager's various diagnostic tools to begin the troubleshooting process. They can also use the integrated access to My Oracle Support to look up solutions and download software patches. 
If further help is needed, administrators can open service requests from right within Oracle Enterprise Manager and track status update. Oracle's support staff, using Enterprise Manager's configuration management capabilities, can collect important configuration information about customer environments in order to expedite problem resolution. This tight integration between Oracle Enterprise Manager and My Oracle Support helps Oracle customers achieve a Superior Ownership Experience for their Oracle products. So there you have it. This is a brief 50,000 feet overview of Oracle Enterprise Manager 11g. We know you are hungry for the details. We are going to write about it in the coming days and weeks. For those of you that absolutely can't wait to find out more, you may download our software to try it out today. In fact, for the first time ever, the initial release of Oracle Enterprise Manager is available for both 32 and 64 bit Linux. Additional O/S ports will arrive in the coming weeks. Please stay tuned on the Oracle Enterprise Manager blog for additional updates.

    Read the article

  • How can I recover an ext4 filesystem corrupted after a fsck?

    - by Regan
    I have an ext4 filesystem on luks over software raid5. The filesystem was operating "just fine" for several years when I was beginning to run out of space. I had a 9T volume on 6x2T drives. I began upgrading to 3T drives by doing the mdadm fail, remove, add, rebuild, repeat process until I had a larger array. I then grew the luks container, and then when I unmounted and tried to resize2fs I was given the message the filesystem was dirty and needed e2fsck. Without thinking I just did e2fsck -y /dev/mapper/candybox and it began spewing all kinds of inode being removed type messages (can't remember exactly) I killed e2fsck and tried to remount the filesystem to backup data I was concerned about. When trying to mount at this point I get: # mount /dev/mapper/candybox /candybox mount: wrong fs type, bad option, bad superblock on /dev/mapper/candybox, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so Looking back at my older logs I noticed the filesystem was giving this error each time the machine booted: kernel: [79137.275531] EXT4-fs (dm-2): warning: mounting fs with errors, running e2fsck is recommended So shame on me for not paying attention :( I then tried to mount using every backup superblock (one after another) and each attempt left this in my log: EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 0 failed (26534!=65440) EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 1 failed (38021!=36729) EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 2 failed (18336!=39845) ... EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 11911 failed (28743!=44098) BUG: soft lockup - CPU#0 stuck for 23s! [mount:2939] Attempts to restart e2fsck results in: # e2fsck /dev/mapper/candybox e2fsck 1.41.14 (22-Dec-2010) e2fsck: Group descriptors look bad... trying backup blocks... candy: recovering journal e2fsck: unable to set superblock flags on candy At this point, I decided it best to order some more drives and make an image using ddrescue Now two weeks later I have an image of the luks partition in a .img file. # ls -lh total 14T -rw-r--r-- 1 root root 14T Oct 25 01:57 candybox.img -rw-r--r-- 1 root root 271 Oct 20 14:32 candybox.logfile After numerous attempts using everything I could find online I could not coerce e2fsck to do anything on the image, so I used mkfs.ext4 -L candy candybox.img -m 0 -S and I was able to mount the dirty filesystem readonly without the journal and recover 960G of data. It gave all kinds of errors of various directories not existing and so forth but I was able to get some stuff. Which gave me some hope! I then ran e2fsck again and it had to recreate the root inode and gave a massive list of correcting group counts, I accepted the root inode creation and said no to everything else, leaving a completely empty filesystem. Re-ran again and said yes to all questions with the same result but now a "clean" but empty filesystem. extundelete gives me 0 recoverable inodes found. And now I'm stuck again, I can't come up with any other methods other than dropping to something like photorec which will give me an absolute mess with how large the filesystem was. I'm willing to re-copy the image from the original array and start over, if I can get any suggestions or ideas on a way to get more of my files back. 
I wish I could give more detailed logs of the commands that were run, but the output has long since scrolled past (apart from what gets logged to syslog), and my memory of the details is hazy given the timeframe this has stretched over. Any help is greatly appreciated!
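    One more avenue that sometimes salvages files from a badly damaged ext filesystem before falling back to photorec is debugfs in catastrophic mode, which opens the image without trying to read the broken inode and block bitmaps (and forces it read-only). It would be worth running against a fresh copy of the image from the original array, since the current copy has had its root inode recreated; the directory and destination paths below are hypothetical:

      # open the image in catastrophic, read-only mode
      debugfs -c candybox.img

      # then, at the debugfs prompt:
      #   ls -l /                               list what is still reachable
      #   rdump /some/directory /mnt/rescue     recursively copy a directory out
      #   dump /some/file /mnt/rescue/file      copy a single file out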

    Read the article

  • haml with rails3 (git master) and devise: form_for syntax change breaks haml -- suggestions?

    - by z3cko
    I am trying to get Haml working with a Rails 3 project; since I am quite far into the modeling I wanted to move on to the Haml views now. It seems that the current Haml (git master) does not work together with the current Rails 3 git master because of some syntax changes in Rails 3's form_for. Does anyone have more information on the syntax changes? Is there a temporary workaround to use Haml with Rails 3? (I am on a deadline) :( See also: http://j.mp/9EYraQ Thanks!
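    For what it's worth, the form_for change that usually trips up Haml views here is that Rails 3 block helpers return their markup instead of writing it to the output buffer themselves, so the view has to output the helper with = rather than run it with -. A sketch, with hypothetical model and field names:

      -# Rails 2.x style: run the helper and let it concat its own output
      - form_for @user do |f|
        = f.text_field :email

      -# Rails 3 style: the helper returns the form markup, so it must be output
      = form_for @user do |f|
        = f.text_field :email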

    Read the article

  • How to get MySite navigation bar to show up on a sub-site that is also using the mysite.master page

    - by Terry Barrett
    Hi, we have created a subsite of the SharePoint 2010 MySite (using the blank site template) called "home". This site uses the mysite.master master page. The problem is that the top link (navigation) bar from MySite is not showing up on the subsite... Any ideas? If I open the home site in SharePoint Designer, the navigation bar does show up... but not in the browser... Any help would be greatly appreciated! Terry

    Read the article

  • Overly accessible and incredibly resource hungry relationships between business objects. How can I f

    - by Mike
    Hi, Firstly, This might seem like a long question. I don't think it is... The code is just an overview of what im currently doing. It doesn't feel right, so I am looking for constructive criticism and warnings for pitfalls and suggestions of what I can do. I have a database with business objects. I need to access properties of parent objects. I need to maintain some sort of state through business objects. If you look at the classes, I don't think that the access modifiers are right. I don't think its structured very well. Most of the relationships are modelled with public properties. SubAccount.Account.User.ID <-- all of those are public.. Is there a better way to model a relationship between classes than this so its not so "public"? The other part of this question is about resources: If I was to make a User.GetUserList() function that returns a List, and I had 9000 users, when I call the GetUsers method, it will make 9000 User objects and inside that it will make 9000 new AccountCollection objects. What can I do to make this project not so resource hungry? Please find the code below and rip it to shreds. public class User { public string ID {get;set;} public string FirstName {get; set;} public string LastName {get; set;} public string PhoneNo {get; set;} public AccountCollection accounts {get; set;} public User { accounts = new AccountCollection(this); } public static List<Users> GetUsers() { return Data.GetUsers(); } } public AccountCollection : IEnumerable<Account> { private User user; public AccountCollection(User user) { this.user = user; } public IEnumerable<Account> GetEnumerator() { return Data.GetAccounts(user); } } public class Account { public User User {get; set;} //This is public so that the subaccount can access its Account's User's ID public int ID; public string Name; public Account(User user) { this.user = user; } } public SubAccountCollection : IEnumerable<SubAccount> { public Account account {get; set;} public SubAccountCollection(Account account) { this.account = account; } public IEnumerable<SubAccount> GetEnumerator() { return Data.GetSubAccounts(account); } } public class SubAccount { public Account account {get; set;} //this is public so that my Data class can access the account, to get the account's user's ID. public SubAccount(Account account) { this.account = account; } public Report GenerateReport() { Data.GetReport(this); } } public static class Data { public static List<Account> GetSubAccounts(Account account) { using (var dc = new databaseDataContext()) { List<SubAccount> query = (from a in dc.Accounts where a.UserID == account.User.ID //this is getting the account's user's ID select new SubAccount(account) { ID = a.ID, Name = a.Name, }).ToList(); } } public static List<Account> GetAccounts(User user) { using (var dc = new databaseDataContext()) { List<Account> query = (from a in dc.Accounts where a.UserID == User.ID //this is getting the user's ID select new Account(user) { ID = a.ID, Name = a.Name, }).ToList(); } } public static Report GetReport(SubAccount subAccount) { Report report = new Report(); //database access code here //need to get the user id of the subaccount's account for data querying. //i've got the subaccount, but how should i get the user id. //i would imagine something like this: int accountID = subAccount.Account.User.ID; //but this would require the subaccount's Account property to be public. //i do not want this to be accessible from my other project (UI). 
//reading up on internal seems to do the trick, but within my code it still feels //public. I could restrict the property to read, and only private set. return report; } public static List<User> GetUsers() { using (var dc = new databaseDataContext()) { var query = (from u in dc.Users select new User { ID = u.ID, FirstName = u.FirstName, LastName = u.LastName, PhoneNo = u.PhoneNo }).ToList(); return query; } } }

    Read the article
