Search Results

Search found 46908 results on 1877 pages for 'managing files and folder'.


  • Managing Operational Risk of Financial Services Processes – part 1/2

    - by Sanjeevio
    Financial institutions view compliance as a regulatory burden that incurs a high initial capital outlay and recurring costs. By its very nature, regulation takes a prescriptive, common-for-all approach to managing financial and non-financial risk. Needless to say, mere compliance with regulation will no longer lead to sustainable differentiation. Genuine competitive advantage will stem from being able to cope with the innovation demands of the present economic environment while meeting the compliance goals of regulatory mandates in a faster and more cost-efficient manner.

    Let's first take a look at the key factors that are limiting the pursuit of this goal. Regulatory requirements are growing, driven in part by revisions to existing mandates in line with the cross-border, pan-geographic nature of financial value chains today, and more so by the frequent systemic failures that have destabilized the financial markets and the global economy over the last decade. In addition to the increase in regulation, financial institutions are faced with pressures of regulatory overlap and regulatory conflict. Regulatory overlap arises primarily from two sources: first, the blurring of boundaries between lines of business in complex organizational structures, and second, the varying requirements of jurisdictional directives across geographic boundaries; e.g. a securities firm with operations in the US and EU would be subject to different "Know Your Customer" (KYC) requirements under the PATRIOT Act in the US and MiFID in the EU. Another consequence and concomitant of regulatory change is regulatory conflict, which again arises primarily from two sources: first, the diametrically opposite priorities of lines of business, and second, the tension that regulatory requirements create between shareholder interests in tighter due diligence and customer concerns about privacy. For instance, Customer Due Diligence (CDD) under KYC requires eliciting detailed information from customers to prevent illegal activities such as money laundering, terrorist financing or identity theft. While new customers are still likely to comply with such stringent background checks at the time of account opening, existing customers baulk at such practices as a breach of trust and privacy.

    As mentioned earlier, regulatory compliance addresses both financial and non-financial risks. Operational risk is a non-financial risk that stems from business execution and spans people, processes, systems and information. Operational risk arising from financial processes in particular transcends other sources of such risk. Let's look at the factors underpinning the operational risk of financial processes. The rapid pace of innovation and geographic expansion of financial institutions has resulted in the proliferation and ad-hoc evolution of back-office, mid-office and front-office processes. This has had two serious implications for the operational risk of financial processes:

    - Inconsistency of processes across lines of business, customer channels and product/service offerings. This makes it harder for the risk function to enforce a standardized risk methodology, and in turn makes breaches harder to detect.
    - The proliferation of processes, coupled with increasingly frequent change cycles, has resulted in accidental breaches and increased vulnerability to regulatory inadequacies.
    In summary, regulatory growth (including overlap and conflict), coupled with process proliferation and inconsistency, is driving process compliance complexity. In my next post I will address the implications of this process complexity for financial institutions and outline the role of BPM in lowering specific aspects of the operational risk of financial processes.

    Read the article

  • Managing common code on Windows 7 (.NET) and Windows 8 (WinRT)

    - by ryanabr
    Recent announcements regarding Windows Phone 8, and the fact that it will have WinRT behind it, might make some of this less painful, but I discovered that the "XmlDocument" object is in a new location in WinRT and is almost the same as its brother in .NET:

        System.Xml.XmlDocument           (.NET)
        Windows.Data.Xml.Dom.XmlDocument (WinRT)

    The problem I am trying to solve is how to work with both types in code that performs the same task on both the Windows Phone 7 and Windows 8 platforms. The first thing I did was define my own XmlNode and XmlNodeList classes that wrap the actual Microsoft objects, so that the calling code can easily work with either the WinRT or the .NET version of the type by means of the "#if" compiler directive:

        public class XmlNode
        {
        #if WIN8
            public Windows.Data.Xml.Dom.IXmlNode Node { get; set; }

            public XmlNode(Windows.Data.Xml.Dom.IXmlNode xmlNode)
            {
                Node = xmlNode;
            }
        #else
            public System.Xml.XmlNode Node { get; set; }

            public XmlNode(System.Xml.XmlNode xmlNode)
            {
                Node = xmlNode;
            }
        #endif
        }

        public class XmlNodeList
        {
        #if WIN8
            public Windows.Data.Xml.Dom.XmlNodeList List { get; set; }

            public int Count { get { return (int)List.Count; } }

            public XmlNodeList(Windows.Data.Xml.Dom.XmlNodeList list)
            {
                List = list;
            }
        #else
            public System.Xml.XmlNodeList List { get; set; }

            public int Count { get { return List.Count; } }

            public XmlNodeList(System.Xml.XmlNodeList list)
            {
                List = list;
            }
        #endif
        }

    From there I can use my XmlNode and XmlNodeList in the calling code without having to clutter that code with all of the additional #if switches. The challenge after this was that the code working directly with the XmlDocument object needed to be separate for the two platforms, since the method for populating the XmlDocument is completely different on each. To solve this issue I made partial classes: one partial class for .NET and one for WinRT. Both projects have links to the partial class that contains the code that is the same for the majority of the class, while the platform-specific partial class contains the code that is unique to that version of the XmlDocument. The files with the little arrow in the lower left corner denote 'linked files'; they are shared by multiple projects but exist in only one location in source control. You can see that the _Win7 partial class is included directly in the project, since it contains code that is only for the .NET platform, whereas its cousin, the _Win8 class (not pictured above), has all of the code specific to the Win8 platform. In the _Win7 partial class is this code:

        public partial class WUndergroundViewModel
        {
            public static WUndergroundData GetWeatherData(double lat, double lng)
            {
                WUndergroundData data = new WUndergroundData();
                System.Net.WebClient c = new System.Net.WebClient();
                string req = "http://api.wunderground.com/api/xxx/yesterday/conditions/forecast/q/[LAT],[LNG].xml";
                req = req.Replace("[LAT]", lat.ToString());
                req = req.Replace("[LNG]", lng.ToString());
                XmlDocument doc = new XmlDocument();
                doc.Load(c.OpenRead(req));
                foreach (System.Xml.XmlNode item in doc.SelectNodes("/response/features/feature"))
                {
                    switch (item.InnerText)
                    {
                        case "forecast":
                            ParseForecast(
                                new FishingControls.XmlNodeList(doc.SelectNodes("/response/forecast/txt_forecast/forecastdays/forecastday")),
                                new FishingControls.XmlNodeList(doc.SelectNodes("/response/forecast/simpleforecast/forecastdays/forecastday")),
                                data);
                            break;
                        case "conditions":
                            ParseCurrent(new FishingControls.XmlNode(doc.SelectSingleNode("/response/current_observation")), data);
                            break;
                        case "yesterday":
                            ParseYesterday(new FishingControls.XmlNodeList(doc.SelectNodes("/response/history/observations/observation")), data);
                            break;
                    }
                }
                return data;
            }
        }

    In the _Win8 partial class is this code:

        public partial class WUndergroundViewModel
        {
            public async static Task<WUndergroundData> GetWeatherData(double lat, double lng)
            {
                WUndergroundData data = new WUndergroundData();
                HttpClient c = new HttpClient();
                string req = "http://api.wunderground.com/api/xxxx/yesterday/conditions/forecast/q/[LAT],[LNG].xml";
                req = req.Replace("[LAT]", lat.ToString());
                req = req.Replace("[LNG]", lng.ToString());
                HttpResponseMessage msg = await c.GetAsync(req);
                string stream = await msg.Content.ReadAsStringAsync();
                XmlDocument doc = new XmlDocument();
                doc.LoadXml(stream, null);
                foreach (IXmlNode item in doc.SelectNodes("/response/features/feature"))
                {
                    switch (item.InnerText)
                    {
                        case "forecast":
                            ParseForecast(
                                new FishingControls.XmlNodeList(doc.SelectNodes("/response/forecast/txt_forecast/forecastdays/forecastday")),
                                new FishingControls.XmlNodeList(doc.SelectNodes("/response/forecast/simpleforecast/forecastdays/forecastday")),
                                data);
                            break;
                        case "conditions":
                            ParseCurrent(new FishingControls.XmlNode(doc.SelectSingleNode("/response/current_observation")), data);
                            break;
                        case "yesterday":
                            ParseYesterday(new FishingControls.XmlNodeList(doc.SelectNodes("/response/history/observations/observation")), data);
                            break;
                    }
                }
                return data;
            }
        }

    Summary: this method allows me to have common 'business' code for both platforms that is pretty clean, and I manage the technology differences separately. Thank you tostringtheory for your suggestion, I was considering that approach.
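    One refinement worth sketching (my illustration, not from the original post): members that both platform node types share can be surfaced once on the wrapper, because each #if branch gives Node a concrete type for the compiler to resolve against. That keeps even simple property access in shared code free of conditional compilation:

        public class XmlNode
        {
            // ... platform-specific Node property and constructors as shown above ...

            // Both System.Xml.XmlNode and Windows.Data.Xml.Dom.IXmlNode expose
            // InnerText, so a single pass-through compiles under either branch.
            public string InnerText
            {
                get { return Node.InnerText; }
            }
        }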

    Read the article

  • Managing Operational Risk of Financial Services Processes – part 2/2

    - by Sanjeev Sharma
    In my earlier blog post, I described the factors that lead to compliance complexity of financial services processes. In this post, I will outline the business implications of the increasing process compliance complexity and the specific role of BPM in addressing the operational risk reduction objectives of regulatory compliance.

    First, let's look at the business implications of the increasing complexity of process compliance for financial institutions:

    - Increased time and cost of compliance, due to duplication of effort in conforming to regulatory requirements as processes change in response to evolving regulatory mandates, shifting business priorities or internal/external audit requirements
    - Delays in audit reporting, due to quality issues in reconciling non-standard process KPIs and integrity concerns arising from the need to rely on multiple data sources for a given process

    Next, let's consider some approaches to managing the operational risk of business processes. Financial institutions looking to reduce the operational risk of their processes have, generally speaking, two choices:

    - Rip and replace existing applications with new off-the-shelf applications.
    - Extend the capabilities of existing applications by modeling their data and process interactions with other applications or user channels, outside of the application boundary, using BPM.

    The benefit of the first approach is that compliance with new regulatory requirements would be embedded within the boundaries of these applications. However, the pre-built compliance of any packaged or custom-built application should not be mistaken for a one-shot fix for future compliance needs. The reason is that business needs and regulatory requirements inevitably outgrow the end-to-end capabilities of even the most comprehensive packaged or custom-built business application. Thus, processes that originally resided within the application will eventually spill outside the application boundary. It is precisely at such hand-offs between applications, or between overlapping processes, that vulnerabilities to unknown and accidental faults arise, potentially resulting in errors and leading to partial or total failure. The gist of the above argument is that processes which reside outside application boundaries (in other words, span multiple applications) constitute a latent operational risk that spans the end-to-end value chain. For instance, distortion of data flowing from an account-opening application to a credit-rating system, if left unchecked, renders compliance with "KYC" policies void even when the "KYC" checklist was enforced at the time of data capture by the account-opening application. Oracle Business Process Management is enabling financial institutions to lower the operational risk of such process "gaps" for financial services processes including "Customer On-boarding", "Quote-to-Contract", "Deposit/Loan Origination", "Trade Exceptions", "Interest Claim Tracking", etc.
    If you are faced with a similar challenge and need any guidance, feel free to drop me a note.

    Read the article

  • Managing software projects - advice needed

    - by Callum
    I work for a large government department as part of an IT team that manages and develops websites as well as stand-alone web applications. We’re running into problems somewhere in the SDLC that don’t rear their ugly head until time and budget are starting to run out.

    We try to be “Agile” (software specifications are kept deliberately light, and clients have direct access to the developers any time they want). We are also in a reasonably peculiar position in that we are not allowed to make a profit from the services we provide. We only service the divisions within our government department, and can only charge for the time and effort we actually put into a project. So if we deliver a project that we have over-quoted on, we will only invoice for the actual time spent.

    Our software specifications are not as thorough as they could be, but they always include at a minimum:

    - Wireframe mockups for every form view
    - A data dictionary of all field inputs
    - Descriptions of any business rules that affect the system
    - Descriptions of the outputs

    I’m new to software management, but I’ve overseen enough software projects now to know that as soon as users start observing demos of the system, they start making a huge number of requests like “Can we add a few more fields to this report… can we redesign the look of this interface… can we send an email at this part of the workflow… can we take this button off this view… can we make this function redirect to a different screen… can we change some text on this screen… can we create a special account where someone can log in and get access to X… this report takes too long to run, can it be optimised… can we remove this step in the workflow… there’s got to be a better image we can put here…” etc.

    Some changes are tiny and can be implemented reasonably quickly, but there could be up to 50-100 or so such requests during the course of the SDLC. Other change requests are things clients claim they “just assumed would be part of the system”, even if not explicitly spelled out in the spec.

    We are having a lot of difficulty managing this process. With no experienced software project managers on our team, we need to come up with a better way to both internally identify whether work being requested is “out of spec”, and communicate this to the client in such a manner that they can understand why what they are asking for is “extra” work. We need a way to track this work and be transparent with it.

    In the spirit of Agile development, where we are not spec'ing software systems into the ground and back again before development begins, and bearing in mind that clients have access to any developer any time they want, I am looking for some tips and pointers from experienced software project managers on how to handle this sort of "scope creep" problem: tracking it, being transparent with it, and communicating it to clients so that they understand it.

    Happy to clarify anything as needed. I really appreciate anyone who takes the time to offer some advice. Thanks.

    Read the article

  • C#, Open Folder and Select multiple files

    - by Vytas999
    Hello, in C# I want to open an Explorer window in which some files must be selected. I do this like that:

        string fPath = newShabonFilePath;
        string arg = @"/select, ";
        int cnt = filePathes.Count;
        foreach (string s in filePathes)
        {
            if (cnt == 1)
                arg = arg + s;
            else
                arg = arg + s + ",";
            cnt--;
        }
        System.Diagnostics.Process.Start("explorer.exe", arg);

    But only the last file in arg is selected. How can I make Explorer select all of the files in arg when the window opens?
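    (A note on why this fails, plus a sketch: explorer.exe's /select switch only honors a single path, so no argument string will select several files at once. One workaround is the shell32 function SHOpenFolderAndSelectItems, sketched below under the assumption that all files sit in the same folder; treat it as an untested illustration rather than a drop-in answer.)

        using System;
        using System.Runtime.InteropServices;

        static class ExplorerSelect
        {
            // shell32 entry points: ILCreateFromPath builds a PIDL for a path,
            // SHOpenFolderAndSelectItems opens the folder and selects the items.
            [DllImport("shell32.dll", CharSet = CharSet.Unicode)]
            private static extern IntPtr ILCreateFromPath(string path);

            [DllImport("shell32.dll")]
            private static extern void ILFree(IntPtr pidl);

            [DllImport("shell32.dll")]
            private static extern int SHOpenFolderAndSelectItems(
                IntPtr pidlFolder, uint cidl, IntPtr[] apidl, uint flags);

            // folder: directory to open; files: full paths of the items inside it.
            public static void OpenAndSelect(string folder, string[] files)
            {
                IntPtr dirPidl = ILCreateFromPath(folder);
                IntPtr[] filePidls = new IntPtr[files.Length];
                try
                {
                    for (int i = 0; i < files.Length; i++)
                        filePidls[i] = ILCreateFromPath(files[i]);

                    SHOpenFolderAndSelectItems(dirPidl, (uint)filePidls.Length, filePidls, 0);
                }
                finally
                {
                    foreach (IntPtr pidl in filePidls)
                        if (pidl != IntPtr.Zero) ILFree(pidl);
                    ILFree(dirPidl);
                }
            }
        }

    Usage would be something like ExplorerSelect.OpenAndSelect(@"C:\temp", filePathes.ToArray()), reusing the file list from the snippet above.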

    Read the article

  • Storing source files outside project file directory in Visual Studio C++ 2009

    - by Skurmedel
    Visual Studio projects assume all files belonging to the project are situated in the same directory as the project file, or one underneath it. For a particular project (in the non-Visual-Studio sense) this is not what I want. I want to store the MSVC-specific files in another folder, because there might be other ways to build the application as well, for example with SCons. Also, all the stuff MSVC splurts out clutters the source directory. Example:

        /source
        /scons
        /msvc   <- here is where I want my MSVC-specific stuff

    I can add the files, in Explorer, to the source directory manually, and then link them in Visual Studio with the project. It's not the end of the world, but it annoys me a bit that Visual Studio tries to dictate the folder structure of my project. I was looking through the schemas for the project files but realized that this annoying assumption is in the IDE and not the format of the project files. Does anyone know a neater way to solve this than manually linking files to the project from the source directory?
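    (For what it's worth, "linked" files are a first-class concept in the project file format itself: a VC++ .vcproj of that era stores plain relative paths, which may point anywhere on disk. A hand-written fragment, with illustrative paths, looks roughly like this:)

        <!-- Fragment of a VS2005/2008-era .vcproj; the paths are illustrative -->
        <Files>
          <Filter Name="Source Files">
            <File RelativePath="..\source\main.cpp" />
            <File RelativePath="..\source\util.cpp" />
          </Filter>
        </Files>

    So the annoyance really does live in the IDE's Add dialogs, not in a limitation of the format.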

    Read the article

  • SmartAssembly Support: How to change the maps folder

    - by Bart Read
    If you've set up SmartAssembly to store error reports in a SQL Server database, you'll also have specified a folder for the map files that are used to de-obfuscate error reports (see Figure 1). Whilst you can change the database easily enough, you can't change the map folder path via the UI - if you click on it, it'll just open the folder in Explorer - but never fear, you can change it manually, and fortunately it's not that difficult. (If you want to get to these settings, click the Tools > Options link on the left-hand side of the SmartAssembly main window.)

    Figure 1. Error reports database settings in SmartAssembly.

    The folder path is actually stored in the database, so you just need to open up SQL Server Management Studio, connect to the SQL Server where your error reports database is stored, then open a new query on the SmartAssembly database by right-clicking on it in the Object Explorer, then clicking New Query (see Figure 2).

    Figure 2. Opening a new query against the SmartAssembly error reports database in SQL Server.

    Now execute the following SQL query in the new query window:

        SELECT * FROM dbo.Information

    You should find that you get a result set rather like that shown in Figure 3. You can see that the map folder path is stored in the MapFolderNetworkPath column.

    Figure 3. Contents of the dbo.Information table, showing the map folder path I set in SmartAssembly.

    All I need to do to change this is execute the following SQL:

        UPDATE dbo.Information
        SET MapFolderNetworkPath = '\\UNCPATHTONEWFOLDER'
        WHERE MapFolderNetworkPath = '\\dev-ltbart\SAMaps'

    This will change the map folder path to whatever I supply in the SET clause. Once you've done this, you can verify the change by executing the following again:

        SELECT * FROM dbo.Information

    You should find the result set contains the new path you've set.

    Read the article

  • Managing .NET Deployment Configuration With Rake

    - by Liam McLennan
    Rake is a Ruby internal DSL for build scripting. With (or without) the help of Albacore, rake makes an excellent build scripting tool for .NET projects. The Albacore documentation does a good job of explaining how to build solutions with rake, but there is nothing to assist with another common build task: updating configuration files. The following Ruby script provides some helper methods for performing common configuration changes that are required as part of a build process.

        class ConfigTasks
          def self.set_app_setting(config_file, key, value)
            setting_element = config_file.root.elements['appSettings'].get_elements("add[@key='#{key}']")[0]
            setting_element.attributes['value'] = value
          end

          def self.set_connection_string(config_file, name, connection_string)
            conn_string_element = config_file.root.elements['connectionStrings'].get_elements("add[@name='#{name}']")[0]
            conn_string_element.attributes['connectionString'] = connection_string
          end

          def self.set_debug_compilation(config_file, debug_compilation)
            compilation_element = config_file.root.elements['system.web'].get_elements("compilation")[0]
            compilation_element.attributes['debug'] = debug_compilation
          end

          private

          def self.write_xml_to_file(xml_document, file)
            File.open(file, 'w') do |config_file|
              formatter = REXML::Formatters::Default.new
              formatter.write(xml_document, config_file)
            end
          end
        end

    To use, require the file and call the class methods, passing the configuration file name and any other parameters.

        require 'config_tasks'
        ConfigTasks.set_app_setting 'web.config', 'enableCache', 'false'

    Read the article

  • Managing Files/Folder in Content Repositories or File Systems with Oracle ADF and WebCenter

    - by Shay Shmeltzer
    One more entry in a set of entries (1, 2, 3) about the capabilities that WebCenter adds to ADF applications. WebCenter is basically the new portal framework in the Oracle stack, and one key thing that portals do is work with content, allowing you to compose and publish content from files as well as save and store content. In this demo you'll see how, using a set of taskflows provided by WebCenter, you can add file management, creation and viewing capabilities to a regular ADF application. To try this out you don't need any fancy content management system - we'll just use your file system for now. All you need is the WebCenter extension installed in JDeveloper, and then you can follow the demo on your own JDeveloper instance. You'll define a connection to your content repository, and you'll be able to add a bunch of pre-built WebCenter taskflows into your page. And suddenly you can upload/download/create and view documents directly from your application. Check it out:

    Read the article

  • Managing Your First SharePoint Project or Team

    - by Mark Rackley
    (*editor’s note* If you have proper SharePoint training, know the difference between a site and a site collection, and have the utmost respect for the knowledge of your SharePoint team, skip this blog and go directly to meetdux.com. Do not pass go, do not collect $200… otherwise, please proceed)

    Dear Mr. or Mrs. I-know-nothing-about-SharePoint-but-hey,-I-have-manager-in-my-title-so-I’ll-tell-you-how-to-do-your-job,

    Thank you so much for joining the Acme corporation. We appreciate your eagerness and willingness to jump in and help us accomplish all of our goals here at Acme (these roadrunner rockets don’t make themselves). You may have noticed that we have this thing called SharePoint lying around, and that we have invested some time and money to make it not a complete piece of garbage. So, I thought I’d give you some pointers to help make your stay here enjoyable and productive.

    Yeah… you don’t really know SharePoint
    Just because you had a mysite at your last organization or had a SharePoint 2003 team site does NOT mean you comprehend the vastness that is SharePoint. You don’t know what’s going on behind the scenes. You don’t know what should and should not be done. No, we CAN’T just query the SQL database directly. Yes, it really does take that long. No, we can’t do that out-of-the-box.

    Your experience doesn’t mean as much as you think it means…
    Yes, I’m aware that you co-created the internet with Al Gore and have been managing projects since I was blowing up GI Joe figures with firecrackers. However, SharePoint is not like anything you have worked with before from a management perspective. Please don’t tell us the proper way to do our job or tell us how “you” would do it, and PLEASE don’t utter the words “I used to do some .NET development, so let me know if you get stuck and need some guidance.” It MAY be possible for an incredible project manager to manage a SharePoint project without understanding the technology, but if you force your ideas on us or treat us like we don’t really know what we’re doing, then you will prove yourself to NOT be one of those types.

    Oh no you didn’t…
    Please don’t tell us how you can bring in a group of guys from Kazakhstan to do the project for $20/hr. There are many companies out there who can do some really crappy SharePoint work, and we don’t want to be stuck maintaining their junk. Do you know what it means to deploy a solution? Neither do some of those companies out there. However, there are a few AWESOME consulting firms out there, but $150/hr is cheap for these guys. Believe me, it’s worth it though. You get what you pay for!

    Show us some respect
    We truly do appreciate and value your opinion and experience, but when we tell you something is different in SharePoint, don’t be condescending and dismiss OUR experience and opinions. We have spent a lot of time and energy learning a very complicated technology that can open up a world of possibilities when used properly. We just want to make sure it is used properly. It’s not the same as .NET development. It’s not like a regular web application. There’s more going on behind the scenes than you can possibly fathom. Have a little faith in us please, and listen when we talk. You may actually learn a thing or two.

    Take some time to learn the technology
    There is hope… you don’t have to be totally worthless. Take some time to learn SharePoint. Learn what it is and what it can do. Invest some time in learning our SharePoint environment. What’s our logical architecture and taxonomy? What governance do we have in place?

    If you just thought “huh?” then yes, I’m talking to you.

    Sincerely,
    Your SharePoint Team

    (This rant is not pointed at any particular organization or person. If you think it’s about you, you are wrong. This is just a general rant based upon things people have told me and things I’ve seen. If you don’t think it applies to you, please move on. If you think you might be guilty of handling your SharePoint team the wrong way, then just please listen, learn, and have a little faith in your team. You all have the same goal in mind. Also, take the time to learn something about SharePoint; you will all be less frustrated with each other.)

    Read the article

  • Software, script or a tool to automate managing which tests to run

    - by laggingreflex
    I have a batch file that lists all the test files I have and asks me which test I want to perform, like:

        Test. [U]nit, [I]ntegration : i (user input)
        Integration. [A]ll, [2][U]serInteraction, [3][R]esultGeneration : u 2
        User Interaction. Running "mocha integration\2userint.js" ...

    So essentially I have configured a batch "option" for each test file I have, which I can choose to run individually or all together. But adding and removing tests is a pain: I have to update the batch file every time a new file is added or changed. Is there software, a script, or a tool that does this automatically, or makes it easier for me to do so? I basically need it to be aware of my test files and ask me which file(s) I want to test. A GUI with checkboxes would be ultimate! But I'll take anything. I'm working in node.js.

    Read the article

  • Managing Spanish Code

    - by Sajith S Narayanan
    Hi All, we have a new project from a client who is Spanish and has all his Java code, comments, variables and method names in Spanish. We are not permitted to convert them into English and then use them. If any of you have worked under such a condition, can you advise what can be done to mitigate this risk? We have to do new development on top of this code, and it is a major show-stopper. Their Java project has a mix of EJBs, Struts, a custom framework, and more than 10,000 Java files with at least 200k lines of code in total (minimum estimate), and it is deployed on WebLogic Server 10. Regards, Dazzlers

    Read the article

  • Examine your readiness for managing Enterprise Private Cloud

    - by Anand Akela
    Cloud computing promises to deliver greater agility to meet demanding business needs, operational efficiencies, and lower cost. However, these promises cannot be realized, and enterprises may not be able to get the best value out of their enterprise private cloud computing infrastructure, without a comprehensive cloud management solution.

    Take this new self-assessment quiz that measures the readiness of your enterprise private cloud. It scores your readiness in the following areas, so you can discover where and how you can improve to gain total cloud control over your enterprise private cloud.

    Complete Cloud Lifecycle Solution
    Check if you are ready to manage all phases of building, managing, and consuming an enterprise cloud. You will learn how Oracle can help build and manage a rich catalog of cloud services - whether it is Infrastructure-as-a-Service, Database-as-a-Service, or Platform-as-a-Service - all from a single product.

    Integrated Cloud Stack Management
    Integrated management of the entire cloud stack, all the way from application to disk, is very important to eliminate the integration pains and costs that customers would otherwise incur by trying to create a cloud environment from multiple point solutions.

    Business-Driven Clouds
    It is critical that an enterprise cloud platform is not only able to run applications but also has deep business insight and visibility. Oracle Enterprise Manager 12c enables the creation of application-aware and business-driven clouds with deep insight into applications, business services and transactions. As the leading provider of business applications and middleware, we are able to offer you a cloud solution that is optimized for business services.

    Proactive Management
    Integration of the enterprise cloud infrastructure with support allows cloud administrators to benefit from Automatic Service Requests (ASR), proactive patch recommendations, health checks and end-of-life advisories for all of the technology deployed within the cloud.

    Learn more about our solutions for enterprise cloud and cloud management by attending various sessions, demos and hands-on labs at Oracle OpenWorld 2012.

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Ubuntu One - Files API - Cloud - More detailed info somewhere?

    - by Brian McCavour
    I am just starting on a mobile app for Ubuntu One, and I'm reviewing the info at https://one.ubuntu.com/developer/files/store_files/cloud

    I find the information a bit lacking though. It's a nice reference, but for someone not familiar with it, I had to do a Google search to find out what a "volume" is exactly (it's kind of obvious, but it never hurts to know the specifics). There are also things like:

        GET /api/file_storage/v1/volumes
        Return a JSON list of Volume Representations, one for each volume. A volume is a synced folder, or the Ubuntu One folder, owned by the user. Note that all volume paths begin with ~.:

    ...but there's no such thing as a JSON "list". Does it mean array? And other things... So I was wondering if there exists another page with more detailed information. Maybe some sample requests/responses or something? I could just write a little proof-of-concept app to answer some of these questions... but I prefer not to unless I have to. Thanks
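    (A partial answer to the "JSON list" point: it almost certainly means a JSON array. As a proof-of-concept sketch, a volumes request might look like the following; note that the real API required OAuth-signed requests, which is glossed over here, and the response fields in the comment are illustrative, not confirmed.)

        using System;
        using System.Net;

        class VolumesProbe
        {
            static void Main()
            {
                var client = new WebClient();
                // Assumption: OAuth signing is handled elsewhere; placeholder header only.
                client.Headers[HttpRequestHeader.Authorization] = "OAuth ...";
                string json = client.DownloadString(
                    "https://one.ubuntu.com/api/file_storage/v1/volumes");

                // Expected shape: a JSON array of volume representations,
                // with illustrative field names:
                // [ { "type": "root", "path": "~/Ubuntu One", ... },
                //   { "type": "udf",  "path": "~/Documents",  ... } ]
                Console.WriteLine(json);
            }
        }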

    Read the article

  • Expire Files In A Folder: Delete Files After x Days

    - by Brett G
    I'm looking to make a "drop folder" on a Windows shared drive that is accessible to everyone. I'd like files to be deleted automagically if they sit in the folder for more than X days. However, it seems like all the methods I've found to do this use the last modified date, last access time, or creation date of a file. I'm trying to make this a folder that a user can drop files in to share with somebody. If someone copies or moves files into here, I'd like the clock to start ticking at that point. However, the last modified date and creation date of a file will not be updated unless someone actually modifies the file, and the last access time is updated too frequently... it seems that just opening a directory in Windows Explorer will update the last access time. Does anyone know of a solution to this? I'm thinking that cataloging the hash of files on a daily basis and then expiring files based on hashes older than a certain date might be a solution... but taking hashes of files can be time-consuming. Any ideas would be greatly appreciated!

    Note: I've already looked at quite a lot of answers on here... looked into File Server Resource Manager, PowerShell scripts, batch scripts, etc. They still use the last access time, last modified time or creation time... which, as described, do not fit the above needs.
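    (One way to implement the "clock starts on arrival" requirement without touching timestamps or hashing at all: keep a small catalog that records when each path was first seen, and expire against that. A rough sketch meant to run once a day as a scheduled task; the folder, catalog path and 14-day window are assumptions to adapt. Note that keying on the path means a renamed file restarts its clock, which the hashing idea above would avoid at greater cost.)

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;

        class DropFolderExpiry
        {
            const string DropFolder = @"\\server\share\DropFolder";             // assumption
            const string CatalogPath = @"C:\ProgramData\DropFolderCatalog.txt"; // assumption
            static readonly TimeSpan MaxAge = TimeSpan.FromDays(14);            // "X days"

            static void Main()
            {
                // Catalog format: one "firstSeenTicks|fullPath" line per file.
                var firstSeen = new Dictionary<string, DateTime>(StringComparer.OrdinalIgnoreCase);
                if (File.Exists(CatalogPath))
                {
                    foreach (string line in File.ReadAllLines(CatalogPath))
                    {
                        string[] parts = line.Split(new[] { '|' }, 2);
                        firstSeen[parts[1]] = new DateTime(long.Parse(parts[0]), DateTimeKind.Utc);
                    }
                }

                DateTime now = DateTime.UtcNow;
                foreach (string path in Directory.GetFiles(DropFolder, "*", SearchOption.AllDirectories))
                {
                    DateTime seen;
                    if (!firstSeen.TryGetValue(path, out seen))
                        firstSeen[path] = now;          // new arrival: clock starts now
                    else if (now - seen > MaxAge)
                        File.Delete(path);              // overstayed: expire it
                }

                // Forget files that are gone (expired above or moved away), then persist.
                File.WriteAllLines(CatalogPath, firstSeen
                    .Where(kv => File.Exists(kv.Key))
                    .Select(kv => kv.Value.Ticks + "|" + kv.Key)
                    .ToArray());
            }
        }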

    Read the article

  • Best way to manually sort random text files?

    - by Jane
    I have about 1000 text files and I need to view each one and move it to a folder if it's the correct one. I can only do basic sorting by length/size, and I can't grep because the text is random. How can I do this besides manually opening and saving each one in gedit? I'm on Ubuntu Linux. Thanks

    Read the article

  • undelete big files - mission impossible?

    - by johnrembo
    Hi, I've accidentally deleted an outlook.pst file (6.7GB), while there was only 400MB of free space left on the primary NTFS partition (WinXP). I've tried several recovery tools to get this file back. "Ontrack Easy Recovery Pro" found 0 pst files (complete scan mode), while "Recover My Files" in sector scan mode found 5 pst's, but 4 of them were of sizes from 3 to 28 KB, while the 5th one was 1GB. I've managed to successfully recover the 1GB pst file, which was a 1-year-old copy (the one used after the latest Windows reinstall). Now, I'm frustrated and confused. Why was the 1-year-old file successfully recovered if there were only 400MB left on the primary partition? Where has the 6.7GB file gone? I did some reading (i.e. here), and it seems that there's almost no probability of retrieving the file I'm looking for, but wait - none of the recovery tools I've used found a zero-sized pst file. Moreover, if due to fragmentation a file might be corrupted, we could use scanpst.exe to fix some errors and survive with 10 or 100 emails missing - whatever. Could you please recommend some more sophisticated recovery tools for this particular task? Appreciate your help - thanks in advance

    Read the article

  • Disk fragmentation when dealing with many small files

    - by Zorlack
    On a daily basis we generate about 3.4 million small JPEG files. We also delete about 3.4 million 90-day-old images. To date, we've dealt with this content by storing the images in a hierarchical manner. The hierarchy is something like this:

        /Year/Month/Day/Source/

    This hierarchy allows us to effectively delete days' worth of content across all sources. The files are stored on a Windows 2003 server connected to a 14-disk SATA RAID6. We've started having significant performance issues when writing to and reading from the disks. This may be due to the performance of the hardware, but I suspect that disk fragmentation may be a culprit as well. Some people have recommended storing the data in a database, but I've been hesitant to do this. Another thought was to use some sort of container file, like a VHD or something. Does anyone have any advice for mitigating this kind of fragmentation?

    Additional info: the average file size is 8-14KB. Format information from fsutil:

        NTFS Volume Serial Number :       0x2ae2ea00e2e9d05d
        Version :                         3.1
        Number Sectors :                  0x00000001e847ffff
        Total Clusters :                  0x000000003d08ffff
        Free Clusters :                   0x000000001c1a4df0
        Total Reserved :                  0x0000000000000000
        Bytes Per Sector :                512
        Bytes Per Cluster :               4096
        Bytes Per FileRecord Segment :    1024
        Clusters Per FileRecord Segment : 0
        Mft Valid Data Length :           0x000000208f020000
        Mft Start Lcn :                   0x00000000000c0000
        Mft2 Start Lcn :                  0x000000001e847fff
        Mft Zone Start :                  0x0000000002163b20
        Mft Zone End :                    0x0000000007ad2000

    Read the article

  • unable to copy release folder to desktop using AppleScript

    - by Miraaj
    Hi all, I tried this script to copy the release folder of one of my Xcode projects to the desktop:

        tell application "Finder"
            set targetFolder to folder "release" of folder "build" of folder "8_15pm" of folder "26th_March" of folder "XYZ" of startup disk
            set destinationFolder to folder "Desktop" of folder "miraaj" of folder "Users" of startup disk
            copy targetFolder to destinationFolder
        end tell

    I was expecting that I would obtain a folder named release on my desktop, but I did not get any :( Can anyone suggest where I am wrong, or a better way to do this? Thanks, Miraaj

    Read the article

  • Part 7: EBS Modifications and Flagged Files in R12

    - by volker.eckardt(at)oracle.com
    Let me, based on my previous blog, explain the procedure of flagged files a bit better, and illustrate it with screenshots. Flagged files is a concept within the Oracle E-Business Suite (EBS) release 12, whereby you flag a standard deployment file - let's say a Forms file, a package or a Java class file. When you run the patch analysis, the list of flagged files will be checked, and in case one of these files gets patched, the analysis report will tell you.

    Note: This functionality is also available in release 11, where it is implemented and known as "applcust.txt".

    You can flag as many files as you want, in whatever relationship they are with your customizations. In addition to the flag itself you can add a comment. You should use this comment to point to your customization reference (here XXAR_RPT_066 or XXAP_CUST_030). Consider the following two cases:

    - You have created your own report, based on a standard report. In this case you will flag the report file itself, and the key views used. When a patch updates one of these files, you will be informed and can initiate a proper review and testing. (ex.: first line for ARXCTA.rdf)
    - You have created an extensive personalization, and because it is business critical you would like to be informed if the page definition gets updated. In this case you register the PG.xml file as a flagged file. (ex.: second line below for CreateExtBankAcctPG.xml)

    The menu path to register flagged files is the following:

        (R) System Administrator > (M) Oracle Applications Manager > Site Map > Maintenance > Register Flagged Files

    Your DBA should now run the patch analysis every time he is going to apply a new patch:

        (R) System Administrator > (M) Oracle Applications Manager > Patch Wizard > Task "Recommend/Analyze Patches"

    The screenshot above shows the impact summary. For this blog entry the number "2", titled "Flagged Files Changed", is our focus. When you click the "2" you will get a screen similar to the first in this blog, showing you exactly the files which will get patched if you continue and apply this patch in this environment right now.

    Note: It is also shown that just 20% of all patch files will get applied. This situation might be different in case your environments are on a different patch level. For sure the customization impact might then be different as well.

    The flagging step can be done directly in the Oracle Applications Manager; our developers are responsible for it. To transport such a flag+comment we use an FNDLOAD script. It is suggested to put the flagged files data file directly into your CEMLI patch. That way the flagged files registration will be executed at the same time the patch gets applied.

    Process steps:

    Developer:
    - Builds CEMLI
    - Reviews code and identifies key standard objects referenced
    - Determines standard object files and flags them
    - Creates FNDLOAD file and adds it to the CEMLI patch

    DBA:
    - Executes the patch analysis in a representative environment for every new Oracle standard patch
    - Checks and retrieves the flagged files and comments
    - Sends the flagged file list back to the development team for analysis/retest

    Developer:
    - Analyses / updates / retests affected CEMLIs

    Prerequisite: The patch analysis has to be executed in an environment where flagged files have been registered. (If you run the patch analysis in a vanilla or outdated environment, compared to your PROD, the analysis will not be so helpful!)

    When to start with flagged files? Start right now utilizing this feature. It is an investment that improves production stability and helps you fulfil your SLA!

    Summary: Flagged files is a very helpful EBS R12 technique when analysing patches. Implement a procedure within your development process to maintain such flags. Let the DBA run the patch analysis in an environment with a similar patch and customization level as your current production.

    Related Links: EBS Patching Procedures - Chapter 2-13 - Registered Flagged Files

    Read the article

  • Managing Joomla via Android

    Surprisingly, it was only today that I actually looked for possible solutions to write more content for my blog. For quite some time I've been using my Samsung Galaxy Tab 10.1 for all kinds of social media activities like Google+, FB, etc., but also for my casual mail during the evening hours. And yes, I feel a little bit guilty about missing the chance to use my tablet to write some content here... OK, only a little bit. ;-)

    These are not the droids you are looking for

    But those lazy times are over! While searching the Play Store with the expression 'joomla' I got three interesting hits:

    - Joomla Admin Mobile!
    - Joooid
    - Joomla! Security Checklist

    After reading the reviews I installed the two latter apps.

    Joomla! Security Checklist

    The author clearly outlines here that the app is primarily for his personal purpose of having a safety checklist at hand at any time. I guess that any reader of this article has an Android-based smartphone or tablet, so that simple app should be part of your toolbox when using Joomla! for your websites.

    Joooid plugin & app

    Although I was looking for an app that could work with the default XML-RPC interface of Joomla, I have to admit that this combination of an enhanced web service suits me better, mainly for performance reasons. The official website has not only the downloads for Joomla versions 1.5 - 2.5 but also very good and easy-to-follow step-by-step instructions to prepare your server for the Android app. It will take you less than 5 minutes to get it up and running. For safety reasons, I recommend that you configure your web server to have an additional authentication layer on the plugins folder. The smartphone app has the ability to run against HTTP authentication.

    Personally, I like the look and feel of the app. It is a little bit different compared to the web UI but still easy to use. In fact, this article is the first one written in the Joooid app. At the moment, I only miss the ability to have list tags.

    Quick and easy

    Writing full-fledged articles with images, a couple of hyperlinks and some styling here and there should be left to the desktop. At least for the moment. Let's see whether I'm going to change my mind on this during the upcoming months... I'll give it a try, and hope to publish at least once per month using Joooid. Actually, it would be great to have some feedback about other Joomla! clients in the wild.

    Read the article

  • Managing JS and CSS for a static HTML web application

    - by Josh Kelley
    I'm working on a smallish web application that uses a little bit of static HTML and relies on JavaScript to load the application data as JSON and dynamically create the web page elements from that.

    First question: Is this a fundamentally bad idea? I'm unclear on how many web sites and web applications completely dispense with server-side generation of HTML. (There are obvious disadvantages of JS-only web apps in the areas of graceful degradation / progressive enhancement and being search-engine friendly, but I don't believe that these are an issue for this particular app.)

    Second question: What's the best way to manage the static HTML, JS, and CSS? For my "development build," I'd like non-minified third-party code, multiple JS and CSS files for easier organization, etc. For the "release build," everything should be minified, concatenated together, etc. If I were doing server-side generation of HTML, it'd be easy to have my web framework generate different development versus release HTML, including multiple verbose files versus concatenated minified code. But given that I'm only serving static HTML, what's the best way to manage this? (I realize I could hack something together with ERB or Perl, but I'm wondering if there are any standard solutions.)

    In particular, since I'm not doing any server-side HTML generation, is there an easy, semi-standard way of setting up my static HTML so that it contains code like

        <script src="js/vendors/jquery.js"></script>
        <script src="js/class_a.js"></script>
        <script src="js/class_b.js"></script>
        <script src="js/main.js"></script>

    at development time and

        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
        <script src="js/entire_app.min.js"></script>

    for release?

    Read the article

  • Managing Kindle Fire on 12.04 via Micro-USB

    - by pirtle
    To begin, I have read both "Is there a way to get a Kindle Fire to work with 12.04?" and "How can I transfer files to a Kindle Fire with a Micro-USB cable?"

    My problem is that I am unable to mount my Kindle Fire in order to add books to it. I have installed calibre, but it is unable to manage any devices until the computer itself has recognized them. The latter post had an excellent answer (provided by @jeremiah) that was making some progress. Unfortunately, I think I don't know enough about the -t flag used with mount. This is what I've done...

    Ran dmesg to locate the device:

        [    3.920886] sd 6:0:0:0: [sdb] Attached SCSI removable disk

    Confirmed its location:

        $ sudo ls -l /dev/disk/by-id
        lrwxrwxrwx 1 root root 9 Aug 18 15:52 usb-Amazon_Kindle_3C6C002600000001-0:0 -> ../../sdb

    So we know that my Kindle is recognized on /dev/sdb. I then used the mount command suggested by @jeremiah:

        $ sudo mount -t ext3 /dev/sdb/ /mnt/kindle/
        mount: no medium found on /dev/sdb

    The same error occurs for sudo mount /dev/sdb /mnt/kindle.

    Note: I have created the 'kindle' directory in '/mnt'.

    Any suggestions?

    Read the article
