Search Results

Search found 28864 results on 1155 pages for 'ob start'.


  • Inside BackgroundWorker

    - by João Angelo
    The BackgroundWorker is a reusable component that can be used in different contexts, but sometimes with unexpected results. If you are like me, you have mostly used background workers while doing Windows Forms development due to the flexibility they offer for running a background task. They support cancellation and give events that signal progress updates and task completion. When used in Windows Forms, these events (ProgressChanged and RunWorkerCompleted) get executed back on the UI thread where you can freely access your form controls. However, the logic of the progress changed and worker completed events being invoked in the thread that started the background worker is not something you get directly from the BackgroundWorker, but instead from the fact that you are running in the context of Windows Forms. Take the following example that illustrates the use of a worker in three different scenarios: – Console Application or Windows Service; – Windows Forms; – WPF. using System; using System.ComponentModel; using System.Threading; using System.Windows.Forms; using System.Windows.Threading; class Program { static AutoResetEvent Synch = new AutoResetEvent(false); static void Main() { var bw1 = new BackgroundWorker(); var bw2 = new BackgroundWorker(); var bw3 = new BackgroundWorker(); Console.WriteLine("DEFAULT"); var unspecializedThread = new Thread(() => { OutputCaller(1); SynchronizationContext.SetSynchronizationContext( new SynchronizationContext()); bw1.DoWork += (sender, e) => OutputWork(1); bw1.RunWorkerCompleted += (sender, e) => OutputCompleted(1); // Uses default SynchronizationContext bw1.RunWorkerAsync(); }); unspecializedThread.IsBackground = true; unspecializedThread.Start(); Synch.WaitOne(); Console.WriteLine(); Console.WriteLine("WINDOWS FORMS"); var windowsFormsThread = new Thread(() => { OutputCaller(2); SynchronizationContext.SetSynchronizationContext( new WindowsFormsSynchronizationContext()); bw2.DoWork += (sender, e) => OutputWork(2); bw2.RunWorkerCompleted += (sender, e) => OutputCompleted(2); // Uses WindowsFormsSynchronizationContext bw2.RunWorkerAsync(); Application.Run(); }); windowsFormsThread.IsBackground = true; windowsFormsThread.SetApartmentState(ApartmentState.STA); windowsFormsThread.Start(); Synch.WaitOne(); Console.WriteLine(); Console.WriteLine("WPF"); var wpfThread = new Thread(() => { OutputCaller(3); SynchronizationContext.SetSynchronizationContext( new DispatcherSynchronizationContext()); bw3.DoWork += (sender, e) => OutputWork(3); bw3.RunWorkerCompleted += (sender, e) => OutputCompleted(3); // Uses DispatcherSynchronizationContext bw3.RunWorkerAsync(); Dispatcher.Run(); }); wpfThread.IsBackground = true; wpfThread.SetApartmentState(ApartmentState.STA); wpfThread.Start(); Synch.WaitOne(); } static void OutputCaller(int workerId) { Console.WriteLine( "bw{0}.{1} | Thread: {2} | IsThreadPool: {3}", workerId, "RunWorkerAsync".PadRight(18), Thread.CurrentThread.ManagedThreadId, Thread.CurrentThread.IsThreadPoolThread); } static void OutputWork(int workerId) { Console.WriteLine( "bw{0}.{1} | Thread: {2} | IsThreadPool: {3}", workerId, "DoWork".PadRight(18), Thread.CurrentThread.ManagedThreadId, Thread.CurrentThread.IsThreadPoolThread); } static void OutputCompleted(int workerId) { Console.WriteLine( "bw{0}.{1} | Thread: {2} | IsThreadPool: {3}", workerId, "RunWorkerCompleted".PadRight(18), Thread.CurrentThread.ManagedThreadId, Thread.CurrentThread.IsThreadPoolThread); Synch.Set(); } } Output: //DEFAULT //bw1.RunWorkerAsync | Thread: 3 | IsThreadPool: 
False //bw1.DoWork | Thread: 4 | IsThreadPool: True //bw1.RunWorkerCompleted | Thread: 5 | IsThreadPool: True //WINDOWS FORMS //bw2.RunWorkerAsync | Thread: 6 | IsThreadPool: False //bw2.DoWork | Thread: 5 | IsThreadPool: True //bw2.RunWorkerCompleted | Thread: 6 | IsThreadPool: False //WPF //bw3.RunWorkerAsync | Thread: 7 | IsThreadPool: False //bw3.DoWork | Thread: 5 | IsThreadPool: True //bw3.RunWorkerCompleted | Thread: 7 | IsThreadPool: False As you can see the output between the first and remaining scenarios is somewhat different. While in Windows Forms and WPF the worker completed event runs on the thread that called RunWorkerAsync, in the first scenario the same event runs on any thread available in the thread pool. Another scenario where you can get the first behavior, even when on Windows Forms or WPF, is if you chain the creation of background workers, that is, you create a second worker in the DoWork event handler of an already running worker. Since the DoWork executes in a thread from the pool the second worker will use the default synchronization context and the completed event will not run in the UI thread.
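    If you do need to chain workers but still want the completion logic back on the UI thread, one workaround is to capture the UI SynchronizationContext before the outer worker starts and post to it explicitly. Below is a minimal sketch of that idea, assuming a Windows Forms or WPF host; the names (uiContext, outerWorker, innerWorker) are illustrative and not taken from the sample above.

    using System.ComponentModel;
    using System.Threading;

    static class ChainedWorkerSample
    {
        public static void StartChainedWorkers()
        {
            // Capture the UI context while we are still on the UI thread.
            SynchronizationContext uiContext = SynchronizationContext.Current;

            var outerWorker = new BackgroundWorker();
            outerWorker.DoWork += (s, e) =>
            {
                // DoWork runs on a thread-pool thread, so a worker created here
                // picks up the default SynchronizationContext, not the UI one.
                var innerWorker = new BackgroundWorker();
                innerWorker.DoWork += (s2, e2) => { /* long-running work */ };
                innerWorker.RunWorkerCompleted += (s2, e2) =>
                {
                    // Marshal back to the UI thread explicitly via the captured context.
                    uiContext.Post(_ => { /* safe to touch controls here */ }, null);
                };
                innerWorker.RunWorkerAsync();
            };
            outerWorker.RunWorkerAsync(); // call this from the UI thread
        }
    }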

    Read the article

  • Integrating a Progress Bar into a Wizard

    - by Geertjan
    Normally, when you create a wizard, as described here, and you have your own iterator, you'll have a class signature like this: public final class MyWizardWizardIterator implements WizardDescriptor.InstantiatingIterator<WizardDescriptor> { Let's now imagine that you've got some kind of long running process your wizard needs to perform. Maybe the wizard needs to connect to something, which could take some time. Start by adding a new dependency on the Progress API, which gives you the classes that access the NetBeans Platform's progress functionality. Now all we need to do is change the class signature very slightly: public final class MyWizardWizardIterator implements WizardDescriptor.ProgressInstantiatingIterator<WizardDescriptor> { Take a look at the part of the signature above that is highlighted. I.e., use WizardDescriptor.ProgressInstantiatingIterator instead of WizardDescriptor.InstantiatingIterator. Now you will need to implement a new instantiate method, one that receives a ProgressHandle. The other instantiate method, i.e., the one that already existed, should never be accessed anymore, and so you can add an assert to that effect: @Override public Set<?> instantiate() throws IOException {     throw new AssertionError("instantiate(ProgressHandle) " //NOI18N             + "should have been called"); //NOI18N } @Override public Set instantiate(ProgressHandle ph) throws IOException {     return Collections.emptySet(); } OK. Let's now add some code to make our progress bar work: @Override public Set instantiate(ProgressHandle ph) throws IOException {     ph.start();     ph.progress("Processing...");     try {         //Simulate some long process:         Thread.sleep(2500);     } catch (InterruptedException ex) {         Exceptions.printStackTrace(ex);     }     ph.finish();     return Collections.emptySet(); } And, maybe even more impressive, you can also do this: @Override public Set instantiate(ProgressHandle ph) throws IOException {     ph.start(1000);     ph.progress("Processing...");     try {         //Simulate some long process:         ph.progress("1/4 complete...", 250);         Thread.sleep(2500);         ph.progress("1/2 complete...", 500);         Thread.sleep(5000);         ph.progress("3/4 complete...", 750);         Thread.sleep(7500);         ph.progress("Complete...", 1000);         Thread.sleep(1000);     } catch (InterruptedException ex) {         Exceptions.printStackTrace(ex);     }     ph.finish();     return Collections.emptySet(); } The screenshots above show you what you should see when the Finish button is clicked in each case.

    Read the article

  • Using Definition of Done to Drive Agile Maturity

    - by Dylan Smith
    I’ve been an Agile Coach at a lot of different clients over the years, and I want to share an approach I use to help them adopt and mature over time. It’s important to realize that “Agile” is not a black/white yes/no thing. Teams can be varying degrees of agile. I think of this as their agile maturity level. When I coach teams I want them to start out being a little agile, and get more agile as they mature. The approach I teach them is to use the definition of done as a technique to continuously improve their agile maturity over time. We’re probably all familiar with the concept of “Done Done” that represents what *actually* being done a feature means. Not just when a developer says he’s done right after he writes that last line of code that makes the feature kind-of work. Done Done means the coding is done, it’s been tested, installers and deployment packages have been created, user manuals have been updated, architecture docs have been updated, etc. To enable teams to internalize the concept of “Done Done”, they usually get together and come up with their Definition of Done (DoD) that defines all the activities that need to be completed before a feature is considered Done Done. The Done Done technique typically is applied only to features (aka User Stories). What I do is extend this to apply to several concepts such as User Stories, Sprints, Releases (and sometimes Check-Ins). During project kick-off I’ll usually sit down with the team and go through an exercise of creating DoD’s for each of these concepts (Stories/Sprints/Releases). We’ll usually start by just brainstorming a bunch of activities that could end up in these various DoD’s. Here’s some examples: Code Reviews StyleCop FxCop User Manuals Updated Architecture Docs Updated Tested by QA Tested by UAT Installers Created Support Knowledge Base Updated Deployment Instructions (for Ops) written Automated Unit Tests Run Automated Integration Tests Run Then we start by arranging these activities into the place they occur today (e.g. Do you do UAT testing only once per release? every sprint? every feature?). If the team was previously Waterfall most of these activities probably end up in the Release DoD. An extremely mature agile team would probably have most of these activities in the DoD for the User Stories (because an extremely mature agile team will probably do continuous deployment and release every story). So what we need to do as a team, is work to move these activities from their current home (Release DoD) down into the Sprint DoD and eventually into the User Story DoD (and maybe into the lower-level Check-In DoD if we decide to use that). We don’t have to move them all down to User Story immediately, but as a team we figure out what we think we’re capable of moving down to the Sprint cycle, and Story cycle immediately, and that becomes our starting DoD’s. Over time the team makes an effort to continue moving activities down from Release->Sprint->Story as they become more agile and more mature. I try to encourage them to envision a world in which they deploy to production as each User Story is completed. They would need to be updating User Manuals, creating installers, doing UAT testing (typical Release cycle activities) on every single User Story. They may never actually reach that point, but they should envision that, and strive to keep driving the activities down closer to the User Story cycle s they mature. This is a great technique to give a team an easy-to-follow roadmap to mature their agile practices over time. 
Sure, there are other aspects to maturity outside of this, but it’s a great technique, easy to visualize, to drive agility into the team. Just keep moving those activities (aka “gates”) down the board from Release->Sprint->Story. I’ll try to give an example of what a recent client of mine had for their DoD’s (this is from memory, so probably not 100% accurate):

Release
- Create/update deployment instructions for Ops
- Instructional videos updated
- Run manual regression test suite
- UAT testing (in this case that meant deploying to an environment shared across the enterprise that mirrored production, and asking other business groups to test their own apps to ensure we didn’t break anything outside our system)

Sprint
- Deploy to UAT environment (but not necessarily actually request that UAT testing occur)
- User guides updated
- Sprint features video created (in this case we decided to create a video each sprint showing off the progress - a video version of the Sprint Demo)

User Story
- Manual test scripts developed and run
- Tested by BA
- Deployed in shared QA environment, using the automated deployment process
- Peer code review

Code Check-In
- Compiled (warning-free)
- Passes StyleCop
- Passes FxCop
- Create installer packages
- Run automated tests
- Run automated integration tests

PS – One of my clients had a great question when we went through this activity. They asked: if a Sprint is by definition done when the end date rolls around (it’s time-boxed), isn’t a DoD on a Sprint meaningless – it’s done on the end date regardless of whether those other activities are complete or not? My answer is that while that statement is true – the Sprint is done regardless when the end date rolls around – if the DoD activities haven’t been completed I would consider the Sprint a failure (similar to not completing what was committed/planned – failure may be too strong a word, but you get the idea). In the Retrospective, that becomes an agenda item to discuss and understand why we weren’t able to complete the activities we agreed would need to be completed each Sprint.

    Read the article

  • Part 8: How to name EBS Customizations

    - by volker.eckardt(at)oracle.com
    You might wonder why I am discussing this here. The reason is simple: nearly every project uses slightly different naming conventions, which always complicates life a bit (for developers, but also for those responsible for setup, and for consultants). Although we always create a document describing the technical object naming conventions, I have rarely seen a dedicated document with functional naming conventions. To be precise, from my standpoint there should always be one global naming definition for an implementation! Let me discuss some related questions: What is the best convention for the customization reference? How to name database objects (tables, packages etc.)? How to name functional objects like Value Sets, Concurrent Programs, etc.? How best to separate customizations from standard objects? What is the best convention for the customization reference? The customization reference is the key you use to reference your customization from other lists, from the project plan etc. Usually it is something like XXHU_CONV_22 (HU=customer abbreviation, CONV=Conversion, object #22) or XXFA_DEPRN_RPT_02 (FA=Fixed Assets, DEPRN=short object group, here depreciation, RPT=Report, 02=2nd report in this area). As this is just a reference (not an object name yet), I would prefer the second option: XX=Customization, FA=main EBS module linked (you may sometimes have more, but FA is the main one), DEPRN_RPT=short name to specify the customization, 02=a unique number. The important point is that the HU isn’t used, because XX is enough to mark a custom object, and the 3rd and 4th characters can be used for the EBS module short name. How to name database objects (tables, packages etc.)? I have led different developer teams, and I know that one common way is to take the customization reference and append more characters to classify the object (like _V for a view, _T1 for a trigger, etc.). The only concern I have with this approach is reusability. If you name your view XXFA_DEPRN_RPT_02_V, no one will choose to reuse this nice view, as it seems to be specific to this CEMLI. My suggestion is rather to name the view XXFA_DEPRN_PERIODS_V and thereby allow reusability for other CEMLIs (although the view will be deployed primarily with CEMLI package XXFA_DEPRN_RPT_02). (Check also one of the following blogs, where I will talk about deployment.) How to name Value Sets, Concurrent Programs, etc.? For Value Sets I would go with the same convention as for database objects, starting with XX<Module> …. For Concurrent Programs the situation is a bit different. This “object” is seen and used by a lot of users, and they will search for it. In many projects it is common to start again with the company short name, or with XX. My proposal would differ. If you have created your own report and you name it “XX: Invoice Report”, the user has to remember that this report does not start with “I”; it starts with X. Would you like typing an X when you are looking for an invoice report? No, you wouldn’t! So my advice would be to name it “Invoice Report (XXAP)”. We still know it is custom (because of the XXAP), but the end user will type the key “i” to get it (and will see similar reports that also start with “i”). I hope the general schema behind this has now become obvious. How best to separate customizations from standard objects? I would not have this section here if naming did not play an important role. Unfortunately, we cannot always link a custom application to our own objects, so the naming is really important. 
In the file system structure we use our $XXyy_TOP, and in JAVA_TOP the objects perhaps also carry an “xx” prefix. But what about the database itself? Although there are different concepts in place, many implementations still use the standard “apps” approach, meaning custom objects are stored in the apps schema (which should not cause any trouble). Final advice: review the naming conventions regularly, say once a month; you may have to add more. And publish them! To summarize: technical and functional customized objects should always follow a naming convention. This convention should be project-wide, and it should be maintained in exactly one place (such as a wiki). If the name is for the end user, put the customization identifier at the end; if it is an internal name, start with XX…

    Read the article

  • Using BizTalk to bridge SQL Job and Human Intervention (Requesting Permission)

    - by Kevin Shyr
    I start off the process with either a BizTalk Scheduler (http://biztalkscheduledtask.codeplex.com/releases/view/50363) or a manual file drop of the XML message.  The manual file drop is to allow the SQL  Job to call a "File Copy" SSIS step to copy the trigger file for the next process and allows SQL  Job to be linked back into BizTalk processing. The Process Trigger XML looks like the following.  It is basically the configuration hub of the business process <ns0:MsgSchedulerTriggerSQLJobReceive xmlns:ns0="urn:com:something something">   <ns0:IsProcessAsync>YES</ns0:IsProcessAsync>   <ns0:IsPermissionRequired>YES</ns0:IsPermissionRequired>   <ns0:BusinessProcessName>Data Push</ns0:BusinessProcessName>   <ns0:EmailFrom>[email protected]</ns0:EmailFrom>   <ns0:EmailRecipientToList>[email protected]</ns0:EmailRecipientToList>   <ns0:EmailRecipientCCList>[email protected]</ns0:EmailRecipientCCList>   <ns0:EmailMessageBodyForPermissionRequest>This message was sent to request permission to start the Data Push process.  The SQL Job to be run is WeeklyProcessing_DataPush</ns0:EmailMessageBodyForPermissionRequest>   <ns0:SQLJobName>WeeklyProcessing_DataPush</ns0:SQLJobName>   <ns0:SQLJobStepName>Push_To_Production</ns0:SQLJobStepName>   <ns0:SQLJobMinToWait>1</ns0:SQLJobMinToWait>   <ns0:PermissionRequestTriggerPath>\\server\ETL-BizTalk\Automation\TriggerCreatedByBizTalk\</ns0:PermissionRequestTriggerPath>   <ns0:PermissionRequestApprovedPath>\\server\ETL-BizTalk\Automation\Approved\</ns0:PermissionRequestApprovedPath>   <ns0:PermissionRequestNotApprovedPath>\\server\ETL-BizTalk\Automation\NotApproved\</ns0:PermissionRequestNotApprovedPath> </ns0:MsgSchedulerTriggerSQLJobReceive>   Every node of this schema was promoted to a distinguished field so that the values can be used for decision making in the orchestration.  The first decision made is on the "IsPermissionRequired" field.     If permission is required (IsPermissionRequired=="YES"), BizTalk will use the configuration info in the XML trigger to format the email message.  Here is the snippet of how the email message is constructed. SQLJobEmailMessage.EmailBody     = new Eai.OrchestrationHelpers.XlangCustomFormatters.RawString(         MsgSchedulerTriggerSQLJobReceive.EmailMessageBodyForPermissionRequest +         "<br><br>" +         "By moving the file, you are either giving permission to the process, or disapprove of the process." +         "<br>" +         "This is the file to move: \"" + PermissionTriggerToBeGenereatedHere +         "\"<br>" +         "(You may find it easier to open the destination folder first, then navigate to the sibling folder to get to this file)" +         "<br><br>" +         "To approve, move(NOT copy) the file here: " + MsgSchedulerTriggerSQLJobReceive.PermissionRequestApprovedPath +         "<br><br>" +         "To disapprove, move(NOT copy) the file here: " + MsgSchedulerTriggerSQLJobReceive.PermissionRequestNotApprovedPath +         "<br><br>" +         "The file will be IMMEDIATELY picked up by the automated process.  This is normal.  You should receive a message soon that the file is processed." +         "<br>" +         "Thank you!"     
); SQLJobSendNotification(Microsoft.XLANGs.BaseTypes.Address) = "mailto:" + MsgSchedulerTriggerSQLJobReceive.EmailRecipientToList; SQLJobEmailMessage.EmailBody(Microsoft.XLANGs.BaseTypes.ContentType) = "text/html"; SQLJobEmailMessage(SMTP.Subject) = "Requesting Permission to Start the " + MsgSchedulerTriggerSQLJobReceive.BusinessProcessName; SQLJobEmailMessage(SMTP.From) = MsgSchedulerTriggerSQLJobReceive.EmailFrom; SQLJobEmailMessage(SMTP.CC) = MsgSchedulerTriggerSQLJobReceive.EmailRecipientCCList; SQLJobEmailMessage(SMTP.EmailBodyFileCharset) = "UTF-8"; SQLJobEmailMessage(SMTP.SMTPHost) = "localhost"; SQLJobEmailMessage(SMTP.MessagePartsAttachments) = 2;   After the permission request email is sent, the next step is to generate the actual Permission Trigger file.  A correlation set is used here on SQLJobName and a newly generated GUID field. <?xml version="1.0" encoding="utf-8"?><ns0:SQLJobAuthorizationTrigger xmlns:ns0="somethingsomething"><SQLJobName>Data Push</SQLJobName><CorrelationGuid>9f7c6b46-0e62-46a7-b3a0-b5327ab03753</CorrelationGuid></ns0:SQLJobAuthorizationTrigger> The end user (the human intervention piece) will either grant permission for this process, or deny it, by moving the Permission Trigger file to either the "Approved" folder or the "NotApproved" folder.  A parallel Listen shape waits for either response.   The next set of steps decides how the SQL Job is to be called, or whether it is called at all.  If permission is denied, the orchestration simply sends out a notification.  If permission is granted, then the flag (IsProcessAsync) in the original Process Trigger is used.  The synchronous part is not really synchronous, but a loop timer that checks the status within the calling stored procedure (for more information, check out my previous post:  http://geekswithblogs.net/LifeLongTechie/archive/2010/11/01/execute-sql-job-synchronously-for-biztalk-via-a-stored-procedure.aspx)  If it's async, then the stored procedure starts the job and BizTalk sends out an email.   And of course, some error notification.   Footnote: The next version of this orchestration will have an additional parallel line near the Listen shape, with a Delay built in and a Loop to send out a daily reminder if no response has been received from the end user.  The synchronous part is used to gather results and execute a data clean-up process so that the SQL Job can be retried.  There are many possibilities here.
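Because the trigger message is so small, it is also easy to generate one outside of BizTalk when exercising the Approved/NotApproved folders. Here is a hedged sketch of that; the namespace URI and share path are the redacted placeholders from the message above, and the class itself is purely illustrative rather than part of the actual solution.

    using System;
    using System.IO;
    using System.Xml.Linq;

    class PermissionTriggerWriter
    {
        static void Main()
        {
            XNamespace ns0 = "somethingsomething";   // redacted in the post
            string triggerFolder = @"\\server\ETL-BizTalk\Automation\TriggerCreatedByBizTalk\";
            string correlationGuid = Guid.NewGuid().ToString();

            // SQLJobName + CorrelationGuid form the correlation set that ties the moved
            // file back to the orchestration instance waiting at the Listen shape.
            var trigger = new XElement(ns0 + "SQLJobAuthorizationTrigger",
                new XElement("SQLJobName", "Data Push"),
                new XElement("CorrelationGuid", correlationGuid));

            string fileName = Path.Combine(triggerFolder,
                "SQLJobAuthorizationTrigger_" + correlationGuid + ".xml");
            new XDocument(trigger).Save(fileName);
        }
    }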

    Read the article

  • Rebuilding CoasterBuzz, Part II: Hot data objects

    - by Jeff
    This is the second post, originally from my personal blog, in a series about rebuilding one of my Web sites, which has been around for 12 years. More: Part I: Evolution, and death to WCF After the rush to get moving on stuff, I temporarily lost interest. I went almost two weeks without touching the project, in part because the next thing on my backlog was doing up a bunch of administrative pages. So boring. Unfortunately, because most of the site's content is user-generated, you need some facilities for editing data. CoasterBuzz has a database full of amusement parks and roller coasters. The entities enjoy the relationships that you would expect, though they're further defined by "instances" of a coaster, to define one that has moved between parks as one, with different names and operational dates. And of course, there are pictures and news items, too. It's not horribly complex, except when you have to account for a name change and display just the newest name. In all previous versions, data access was straight SQL. As so much of the old code was rooted in 2003, with some changes in 2008, there wasn't much in the way of ORM frameworks going on then. Let me rephrase that, I mostly wasn't interested in ORM's. Since that time, I used a little LINQ to SQL in some projects, and a whole bunch of nHibernate while at Microsoft. Through all of that experience, I have to admit that these frameworks are often a bigger pain in the ass than not. They're great for basic crud operations, but when you start having all kinds of exotic relationships, they get difficult, and generate all kinds of weird SQL under the covers. The black box can quickly turn into a black hole. Sometimes you end up having to build all kinds of new expertise to do things "right" with a framework. Still, despite my reservations, I used the newer version of Entity Framework, with the "code first" modeling, in a science project and I really liked it. Since it's just a right-click away with NuGet, I figured I'd give it a shot here. My initial effort was spent defining the context class, which requires a bit of work because I deviate quite a bit from the conventions that EF uses, starting with table names. Then throw some partial querying of certain tables (where you'll find image data), and you're splitting tables across several objects (navigation properties). I won't go into the details, because these are all things that are well documented around the Internet, but there was a minor learning curve there. The basics of reading data using EF are fantastic. For example, a roller coaster object has a park associated with it, as well as a number of instances (if it was ever relocated), and there also might be a big banner image for it. This is stupid easy to use because it takes one line of code in your repository class, and by the time you pass it to the view, you have a rich object graph that has everything you need to display stuff. Likewise, editing simple data is also, well, simple. For this goodness, thank the ASP.NET MVC framework. The UpdateModel() method on the controllers is very elegant. Remember the old days of assigning all kinds of properties to objects in your Webforms code-behind? What a time consuming mess that used to be. Even if you're not using an ORM tool, having hydrated objects come off the wire is such a time saver. Not everything is easy, though. 
When you have to persist a complex graph of objects, particularly if they were composed in the user interface with all kinds of AJAX elements and list boxes, it's not just a simple matter of submitting the form. There were a few instances where I ended up going back to "old-fashioned" SQL just in the interest of time. It's not that I couldn't do what I needed with EF, it's just that the efficiency, both my own and that of the generated SQL, wasn't good. Since EF context objects expose a database connection object, you can use that to do the old school ADO.NET stuff you've done for a decade. Using various extension methods from POP Forums' data project, it was a breeze. You just have to stick to your decision, in this case. When you start messing with SQL directly, you can't go back in the same code to messing with entities because EF doesn't know what you're changing. Not really a big deal. There are a number of take-aways from using EF. The first is that you write a lot less code, which has always been a desired outcome of ORM's. The other lesson, and I particularly learned this the hard way working on the MSDN forums back in the day, is that trying to retrofit an ORM framework into an existing schema isn't fun at all. The CoasterBuzz database isn't bad, but there are design decisions I'd make differently if I were starting from scratch. Now that I have some of this stuff done, I feel like I can start to move on to the more interesting things on the backlog. There's a lot to do, but at least it's fun stuff, and not more forms that will be used infrequently.
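For anyone who hasn't tried the "code first" flavour of Entity Framework, the shape of it is roughly as follows. This is only an illustrative sketch - the Park/Coaster classes and tbl_ table names are hypothetical, not the real CoasterBuzz schema - but it shows the two things called out above: overriding table-name conventions and getting a navigation property hydrated for you.

    using System.Collections.Generic;
    using System.Data.Entity;

    public class Park
    {
        public int ParkId { get; set; }
        public string Name { get; set; }
        public virtual ICollection<Coaster> Coasters { get; set; }
    }

    public class Coaster
    {
        public int CoasterId { get; set; }
        public string Name { get; set; }
        public int ParkId { get; set; }
        // Navigation property: one query in the repository gives you the coaster
        // with its park already attached, ready to hand to the view.
        public virtual Park Park { get; set; }
    }

    public class CoasterContext : DbContext
    {
        public DbSet<Park> Parks { get; set; }
        public DbSet<Coaster> Coasters { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // The legacy table names don't match EF's conventions, so map them explicitly.
            modelBuilder.Entity<Park>().ToTable("tbl_Parks");
            modelBuilder.Entity<Coaster>().ToTable("tbl_Coasters");
        }
    }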

    Read the article

  • Mounting ddrescue image after recovery (in over my head)

    - by BorgDomination
    I'm having problems mounting the recovery image. I've tried to mount the image multiple ways. quark@DS9 ~ $ sudo mount -t ext4 /media/jump1/1recover/sdb1.img /mnt mount: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so quark@DS9 ~ $ sudo mount -r -o loop /media/jump1/1recover/sdb1.img recover mount: you must specify the filesystem type quark@DS9 ~ $ sudo mount /media/jump1/1recover/sdb1.img mnt mount: you must specify the filesystem type It doesn't even give me detailed information on the file I just made, nautilus says it's 160gb. quark@DS9 ~ $ file /media/jump1/1recover/sdb1.img /media/jump1/1recover/sdb1.img: data quark@DS9 ~ $ mmls /media/jump1/1recover/sdb1.img Cannot determine partition type I'm not sure what I'm doing wrong or if I started this process incorrectly from the beginning. I've outlined what I've done so far below. I'm clueless, I'd appreciate if someone had some input for me. What I have done from the beginning My laptop has two hard drives. One has the dual boot Win7 / Linux Mint system files. Secondary one contained my /home folder. The laptop was jarred and the /home disk was broken. I tried a LiveCD recovery, it failed. Wouldn't even load a Live session with the disk installed. So I turned to ddrescue. quark@DS9 ~ $ sudo fdisk -l Disk /dev/sda: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0009fc18 Device Boot Start End Blocks Id System /dev/sda1 * 2048 112642047 56320000 7 HPFS/NTFS/exFAT /dev/sda2 138033152 312580095 87273472 83 Linux /dev/sda3 112644094 138033151 12694529 5 Extended /dev/sda5 112644096 132173823 9764864 83 Linux /dev/sda6 132175872 138033151 2928640 82 Linux swap / Solaris Partition table entries are not in disk order Disk /dev/sdb: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0002a8ea Device Boot Start End Blocks Id System /dev/sdb1 * 63 312576704 156288321 83 Linux Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xed6d054b Device Boot Start End Blocks Id System /dev/sdc1 63 1953520064 976760001 7 HPFS/NTFS/exFAT sda - 160g internal, holds all system files and all computer functions. sdb - 160g internal, BROKEN, contains about 140g of data I'd like to recover. sdc - 1T external, contains recovery image. Only place that has space to do all this. From this site, https://apps.education.ucsb.edu/wiki/Ddrescue I used this script to create an image of the broken hard drive. I changed the destination to the external USB drive. 
#!/bin/sh prt=sdb1 src=/dev/$prt dst=/media/jump1/1recover/$prt.img log=$dst.log sudo time ddrescue --no-split $src $dst $log sudo time ddrescue --direct --max-retries=3 $src $dst $log sudo time ddrescue --direct --retrim --max-retries=3 $src $dst $log Everything looked like it came off without a hitch: quark@DS9 ~ $ sudo bash recover1 Press Ctrl-C to interrupt Initial status (read from logfile) rescued: 0 B, errsize: 0 B, errors: 0 Current status rescued: 160039 MB, errsize: 4096 B, current rate: 35588 B/s ipos: 3584 B, errors: 1, average rate: 22859 kB/s opos: 3584 B, time from last successful read: 0 s Finished 12.78user 1060.42system 1:56:41elapsed 15%CPU (0avgtext+0avgdata 4944maxresident)k 312580958inputs+0outputs (1major+601minor)pagefaults 0swaps Press Ctrl-C to interrupt Initial status (read from logfile) rescued: 160039 MB, errsize: 4096 B, errors: 1 Current status rescued: 160039 MB, errsize: 1024 B, current rate: 0 B/s ipos: 1536 B, errors: 1, average rate: 13 B/s opos: 1536 B, time from last successful read: 1.3 m Finished 0.00user 0.00system 3:43.95elapsed 0%CPU (0avgtext+0avgdata 4944maxresident)k 238inputs+0outputs (3major+374minor)pagefaults 0swaps Press Ctrl-C to interrupt Initial status (read from logfile) rescued: 160039 MB, errsize: 1024 B, errors: 1 Current status rescued: 160039 MB, errsize: 1024 B, current rate: 0 B/s ipos: 1536 B, errors: 1, average rate: 0 B/s opos: 1536 B, time from last successful read: 3.7 m Finished 0.00user 0.00system 3:43.56elapsed 0%CPU (0avgtext+0avgdata 4944maxresident)k 8inputs+0outputs (0major+376minor)pagefaults 0swaps It looks like, from where I'm standing it worked perfectly. Here's the log: # Rescue Logfile. Created by GNU ddrescue version 1.14 # Command line: ddrescue --direct --retrim --max-retries=3 /dev/sdb1 /media/jump1/1recover/sdb1.img /media/jump1/1recover/sdb1.img.log # current_pos current_status 0x00000600 + # pos size status 0x00000000 0x00000400 + 0x00000400 0x00000400 - 0x00000800 0x254314FC00 + I'm not sure how to proceed. Does this mean all of my data is lost???????? Appreciate ANY input!

    Read the article

  • Some Early Considerations

    - by Chris Massey
    Following on from my previous post, I want to say "thank you" to everyone who has got in touch and got involved – you are pioneers! An update on where we are right now: paper prototypes v1 To be more specific, we’ve picked two of the ideas that seem to have more pros than cons, turned them into Balsamiq mockups, and are getting them fleshed out with realistic content. We’ll initially make these available to the aforementioned pioneers (thank you again), roll in the feedback, and then open up to get more data on what works and what doesn’t. If you’ve got any questions about this (or what we’re working on right now), feel free to ask me in the comments below. I’ve had a few people express an interest in the process we’re going through, and I’m more than happy to share details more frequently as we go along – not least because you, dear reader, will help us stay on target and create something Good. To start with, here’s a quick flashback to bring you all up to speed. A Brief Retrospective As you may already know, we’re creating a new publishing asset specifically focused on providing great content for web developers. We don’t yet know exactly what this thing will look like, or exactly how it will work, but we know we want to create something that is useful different. For my part, I’m seriously excited at the prospect of building a genuinely digital publishing system (as opposed to what most publishing is these days, which is print-style publishing which just happens to be on the web). The main challenge at this point is working out our build-measure-assess loop to speed up our experimental turn-around, and that’ll get better as we run more trials. Of course, there are a few things we’ve been pondering at this early conceptual stage: Do we publishing about heterogeneous technology stacks from day 1, or do we start with ASP.NET (which we’re familiar with) & branch out later? There are challenges with either approach. What publishing "modes" are already being well-handled? For example, the likes of Pluralsight, TekPub, and Treehouse have pretty much nailed video training (debate about price, if you like), and unless we think we can do it faster / better / cheaper (unlikely, for the record), we should leave them to it. Where should we base whatever we create? Should we create a completely new asset under a new name, graft something onto Simple-Talk (like the labs), or just build something directly into Simple-Talk? It sounds trivial, but it does have at least some impact on infrastructure and what how we manage the different types of content we (will) have. Are there any obvious problems or niches that we think could address really well, or should we just throw ideas out and see what readers respond to? What kind of users do we want to provide for? This actually deserves a little bit of unpacking… Why are you here? We currently divide readers into (broadly) the categories: Category 1: I know nothing about X, and I’d like to learn about it. Category 2: I know something about X, but I’d like to learn how to do something specific with it. Category 3: Ah man, I have a problem with X, and I need to fix it now. Now that I think about it, I might also include a 4th class of reader: Category 4: I’m looking for something interesting to engage my brain. These are clearly task-based categorizations, and depending on which task you’re performing when you arrive here, you’re going to need different types of content, or will have specific discovery needs. 
One of the questions that’s at the back of my mind whenever I consider a new idea is “How many of the categories will this satisfy?” As an example, typical video training is very well suited to categories 1, 2, and 4. StackOverflow is very well suited to category 3, and serves as a sign-posting system to the rest. Clearly it’s not necessary to satisfy every category’s needs to be useful and popular, but being aware of what behavior readers might be exhibiting when they arrive will help us tune our ideas appropriately. < / Flashback > We don’t have clean answers to most of these considerations – they’re things we’re aware of, and each idea we look at is going to be best suited to a different mix of the options I’ve described. Our first experimental loop will be coming full circle in the next few days, so we should start to see how the different possibilities vary between ideas. Feel free to chime in with questions and suggestions about anything I’ve just brain-dumped, or at any stage as we go along. If you see anything that intrigues or enrages you, or just have an idea you’d like to share, I’d love to hear from you.

    Read the article

  • No More NCrunch For Me

    - by Steve Wilkes
    When I opened up Visual Studio this morning, I was greeted with this little popup: NCrunch is a Visual Studio add-in which runs your tests while you work so you know if and when you've broken anything, as well as providing coverage indicators in the IDE and coverage metrics on demand. It recently went commercial (which I thought was fair enough), and time is running out for the free version I've been using for the last couple of months. From my experiences using NCrunch I'm going to let it expire, and go about my business without it. Here's why. Before I start, let me say that I think NCrunch is a good product, which is to say it's had a positive impact on my programming. I've used it to help test-drive a library I'm making right from the start of the project, and especially at the beginning it was very useful to have it run all my tests whenever I made a change. The first problem is that while that was cool to start with, it’s recently become a bit of a chore. Problems Running Tests NCrunch has two 'engine modes' in which it can run tests for you - it can run all your tests when you make a change, or it can figure out which tests were impacted and only run those. Unfortunately, it became clear pretty early on that that second option (which is marked as 'experimental') wasn't really working for me, so I had to have it run everything. With a smallish number of tests and while I was adding new features that was great, but I've now got 445 tests (still not exactly loads) and am more in a 'clean and tidy' mode where I know that a change I'm making will probably only affect a particular subset of the tests. With that in mind it's a bit of a drag sitting there after I make a change and having to wait for NCrunch to run everything. I could disable it and manually run the tests I know are impacted, but then what's the point of having NCrunch? If the 'impacted only' engine mode worked well this problem would go away, but that's not what I found. Secondly, what's wrong with this picture? I've got 445 tests, and NCrunch has queued 455 tests to run. So it's queued duplicate tests - in this quickly-screenshotted case 10, but I've seen the total queue get up over 600. If I'm already itchy waiting for it to run all my tests against a change I know only affects a few, I'm even itchier waiting for it to run a lot of them twice. Problems With Code Coverage NCrunch marks each line of code with a dot to say if it's covered by tests - a black dot says the line isn't covered, a red dot says it's covered but at least one of the covering tests is failing, and a green dot means all the covering tests pass. It also calculates coverage statistics for you. Unfortunately, there's a couple of flaws in the coverage. Firstly, it doesn't support ExcludeFromCodeCoverage attributes. This feature has been requested and I expect will be included in a later release, but right now it doesn't. So this: ...is counted as a non-covered line, and drags your coverage statistics down. Hmph. As well as that, coverage of certain types of code is missed. This: ...is definitely covered. I am 100% absolutely certain it is, by several tests. NCrunch doesn't pick it up, down go my coverage statistics. I've had NCrunch find genuinely uncovered code which I've been able to remove, and that's great, but what's the coverage percentage on this project? Umm... I don't know. Conclusion None of these are major, tool-crippling problems, and I expect NCrunch to get much better in future releases. 
The current version has some great features, like this: ...that's a line of code with a failing test covering it, and NCrunch can run that failing test and take me to that line exquisitely easily. That's awesome! I'd happily pay for a tool that can do that. But here's the thing: NCrunch (currently) costs $159 (about £100) for a personal licence and $289 (about £180) for a commercial one. I'm not sure which one I'd need, as my project is a personal one that I intend to open-source but I'm a professional, self-employed developer; in any case, that seems like a lot of money for an imperfect tool. If it did everything it's advertised to do more or less perfectly I'd consider it, but it doesn't. So no more NCrunch for me.
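For reference, since the screenshots don't survive in this excerpt, the coverage exclusion mentioned above uses the standard BCL attribute. A made-up example (the Widget class is purely illustrative):

    using System.Diagnostics.CodeAnalysis;

    public class Widget
    {
        // Honoured by Visual Studio's own coverage tooling, but (at the time of
        // writing) not recognised by NCrunch, so this member still counts as uncovered.
        [ExcludeFromCodeCoverage]
        public override string ToString()
        {
            return "Widget";
        }
    }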

    Read the article

  • What's New in Oracle VM VirtualBox 4.2?

    - by Fat Bloke
    A year is a long time in the IT industry. Since the last VirtualBox feature release, which was a little over a year ago, we've seen: new releases of cool new operating systems, such as Windows 8, ChromeOS, and Mountain Lion; we've seen a myriad of new Linux releases from big Enterprise class distributions like Oracle 6.3, to accessible desktop distros like Ubuntu 12.04 and Fedora 17; and we've also seen the spec of a typical PC or laptop double in power. All of these events have influenced our new VirtualBox version which we're releasing today. Here's how... Powerful hosts  One of the trends we've seen is that as the average host platform becomes more powerful, our users are consistently running more and more vm's. Some of our users have large libraries of vm's of various vintages, whilst others have groups of vm's that are run together as an assembly of the various tiers in a multi-tiered software solution, for example, a database tier, middleware tier, and front-ends.  So we're pleased to unveil a more powerful VirtualBox Manager to address the needs of these users: VM Groups Groups allow you to organize your VM library in a sensible way, e.g.  by platform type, by project, by version, by whatever. To create groups you can drag one VM onto another or select one or more VM's and choose Machine...Group from the menu bar. You can expand and collapse groups to save screen real estate, and you can Enter and Leave a group (think iPad navigation here) by using the right and left arrow keys when groups are selected. But groups are more than passive folders, because you can now also perform operations on groups, rather than all the individual VMs. So if you have a multi-tiered solution you can start the whole stack up with just one click. Autostart Many VirtualBox users run dedicated services in their VMs, for example, running a Wiki. With these types of VM workloads, you really want the VM start up when the host machine boots up. So with 4.2 we've introduced a cross-platform Auto-start mechanism to allow you to treat VMs as host services. Headless VM Launching With VM's such as web servers, wikis, and other types of server-class workloads, the Console of the VM is pretty much redundant. For some time now VirtualBox has offered a separate launch mechanism for these VM's, namely the command-line interface commands VBoxHeadless or VBoxManage startvm ... --type headless commands. But with 4.2 we also allow you launch headless VMs from the Manager. Simply hold down Shift when launching the VM from the Manager.  It's that easy. But how do you stop a headless VM? Well, with 4.2 we allow you to Close the VM from the Manager. (BTW best to use the ACPI Shutdown method which allows the guest VM to close down gracefully.) Easy VM Creation For our expert users, the  New VM Wizard was a little tiresome, so now there's a faster 2-click VM creation mode. Just Hide the description when creating a new VM. Powerful VMs  As the hosts have become more powerful, so are the guests that are running inside them. Here are some of the 4.2 features to accommodate them: Virtual Network Interface Cards  With 4.2, it's now possible to create VMs with up to 36 NICs, when using the ICH9 chipset emulation. But with great power comes great responsibility (didn't Obi-Wan say something similar?), and so we have also introduced bandwidth limiting to prevent a rogue VM stealing the whole pipe. VLAN tagging Some of our users leverage VLANs extensively so we've enhanced the E1000 NICs to support this.  
Processor Performance If you are running a CPU which supports Nested Paging (aka EPT in the Intel world) such as most of the Core i5 and i7 CPUs, or are running an AMD Bulldozer or later, you should see some performance improvements from our work with these processors. And while we're talking Processors, we've added support for some of the more modern VIA CPUs too. Powerful Automation Because VirtualBox runs atop a fully blown operating system, it makes sense to leverage the capabilities of the host to run scripts that can drive the guest VMs. Guest Automation was introduced in a prior release but with 4.2 we've revamped the APIs to allow a richer and more powerful set of operations to be executed by the guest. Check out the IGuest APIs in the VirtualBox Programming Guide and Reference (SDK). Powerful Platforms  All the hardcore engineering that has gone into 4.2 has been done for a purpose and that is to deliver a fast and powerful engine that can run almost any x86 OS because of the integrity of the virtualization. So we're pleased to add support for these platforms: Mac OS X "Mountain Lion"  Windows 8 Windows Server 2012 Ubuntu 12.04 (“Precise Pangolin”) Fedora 17 Oracle Linux 6.3  Here's the proof: We don't have time to go into the myriad of smaller improvements such as support for burning audio CDs from a guest, bi-directional clipboard control,  drag-and-drop of files into Linux guests, etc. so we'll leave that as an exercise for the user as soon as you've downloaded from the Oracle or community site and taken a peek at the User Guide. So all in all, a pretty solid release, one that we hope you'll enjoy discovering. - FB 

    Read the article

  • how to run mysql drop and create synonym in shell script

    - by bgrif
    I have added this command to a script I am writing and I am running into a issue with it not logging onto mysql and running the commands. How can i fix this and make it run. #! /bin/bash Subject: Please stage the following TFL09143 Locator Bulletin to all TF90 staging environments: # This next section is to go to mysql server and make changes. you can drop and create synonyms truncate a table and insert into a different one. you will be able to verify the counts to the different locations # $ mysql --host=app03-bsi --u "" --p "" "TF90BPS" -bse "drop synonym TF90.BTXADDR && drop synonym TF90.BTXSUPB && CREATE SYNONYM TF90.BTXADDR FOR TF90BP.TFBPS2.BTXADDR && CREATE SYNONYM TF90.BTXSUPB FOR TF90BP.TFBPS3.BTXSUPB && TRUNCATE TABLE TF90BP.TFBPS3.BTXSUPB SELECT * FROM TF90BP.TFBPS2.BTXSUPB; select count () from TF90BP.TF90.BTXADDR select count() from TF90BPS.TF90.BTXADDR; select count() from TF90BP.TF90.BTXSUPB; select count() from TF90BPS.TF90.BTXSUPB;" $ mysql --host=app03-bsi --u "" --p "" "TF90LMS" -bse "drop synonym TF90.BTXADDR && drop synonym TF90.BTXSUPB && CREATE SYNONYM TF90.BTXADDR FOR TF90LM.TFBPS2.BTXADDR && CREATE SYNONYM TF90.BTXSUPB FOR TF90LM.TFBPS3.BTXSUPB; TRUNCATE TABLE TF90LM.TFLMS2.BTXADDR;TRUNCATE TABLE TF90LM.TFLMS3.BTXSUPB;INSERT INTO TF90LM.TFLMS3.BTXSUPB SELECT * FROM TF90LM.TFLMS2.BTXSUPB;Verify select count() from TF90LM.TF90.BTXADDR;select count() from TF90LMS.TF90.BTXADDR;select count() from TF90LM.TF90.BTXSUPB;select count() from TF90LMS.TF90.BTXSUPB" $ mysql --host=app03-bsi --u "" --p "" "TF90NCS" -bse "drop synonym TF90.BTXADDR && drop synonym TF90.BTXSUPB && CREATE SYNONYM TF90.BTXADDR FOR TF90NC.TFBPS2.BTXADDR && CREATE SYNONYM TF90.BTXSUPB FOR TF90NC.TFBPS3.BTXSUPB; TRUNCATE TABLE TF90NC.TFNCS2.BTXADDR; TRUNCATE TABLE TF90NC.TFNCS3.BTXSUPB; INSERT INTO TF90NC.TFNCS3.BTXSUPB SELECT * FROM TF90NC.TFNCS2.BTXSUPB; Verify select count() from TF90NC.TF90.BTXADDR; select count() from TF90NCS.TF90.BTXADDR;select count() from TF90NC.TF90.BTXSUPB;select count() from TF90NCS.TF90.BTXSUPB" $ mysql --host=app03-bsi --u "" --p "" "TF90PVS" -bse "drop synonym TF90.BTXADDR && drop synonym TF90.BTXSUPB && CREATE SYNONYM TF90.BTXADDR FOR TF90PV.TFBPS2.BTXADDR && CREATE SYNONYM TF90.BTXSUPB FOR TF90PV.TFBPS3.BTXSUPB; TRUNCATE TABLE TF90PV.TFPVS2.BTXADDR;TRUNCATE TABLE TF90PV.TFPVS3.BTXSUPB;INSERT INTO TF90PV.TFPVS3.BTXSUPB SELECT * FROM TF90PV.TFPVS2.BTXSUPB;Verify select count() from TF90PV.TF90.BTXADDR;select count() from TF90PVS.TF90.BTXADDR;select count() from TF90PV.TF90.BTXSUPB;select count() from TF90PVS.TF90.BTXSUPB" TFL09143 Staging cd \ntsrv\common\To\IT-CERT-TEST\TFL09143 #change to mapped network drive cp -p TFL09143.pkg /d:/tf90/code_stg && /tf90bp/code_stg && /tf90lm/code_stg && /tf90pv/code_stg # Copies the package from the networked folder and then copies to the location(s) needed.# InvalidInput="true" if [ $# -eq 0 ] ; then echo "This script sets up TF90 Staging" echo -n "Which production do you want to run? 
(RB/TaxLocator/Cyclic)" read ProductionDistro else ProductionDistro="$1" fi while [ "$InvalidInput" = "true" ] do if [ "$ProductionDistro" = "RB" -o "$ProductionDistro" = "TaxLocator" -o "$ProductionDistro" = "Cyclic" ] ; then InvalidInput="false" break else echo "You have entered an error" echo "You must type RB or TaxLocator or Cyclic" echo "you typed $ProductionDistro" echo "This script sets up TF90 Staging" read ProductionDistro fi done InvalidInput="true" if [ $# -eq 0 ] ; then echo "This script sets up RB TF90 Staging" echo -n "Which Element do you want to run? (TF90/TF90BP/TF90LM/TF90PV/ALL)" read ElementDistro else ElementDistro="$1" fi while [ "$InvalidInput" = "true" ] do if [ "$ElementDistro" = "TF90" -o "$ElementDistro" = "TF90BP" -o "$ElementDistro" = "TF90LM" -o "$ElementDistro" = "TF90PV" -o "$ElementDistro" = "ALL" ] ; then InvalidInput="false" break else echo "You have entered an error" echo "You must type TF90 or TF90BP or TF90LM or TF90PV" echo "you typed $ElementDistro" echo "This script sets up TF90 Staging" read ElementDistro fi done if [ "$ElementDistro" = "TF90" ] ; then cd /d/tf90/code_stg vim TFL09143.pkg export var=TF90_CONNECT_STRING=DSN=TF90NCS;export Description=TF90NCS;export Trusted_Connection=Yes;export WSID=APP03- BSI;export DATABASE=TF90NCS; export DATASET=DEFAULT pkgintall -l -v ../TFL09143.pkg fi if [ "$ElementDistro" = "$TF90BP" ] ; then cd /d/tf90bp/code_stg vim TFL09143.pkg export TF90_CONNECT_STRING=DSN=TF90BPS;export Description=TF90BPS;export Trusted_Connection=Yes;export WSID=APP03- BSI;export DATABASE=TF90BPS; start tfloader -l –v ../TFL09143.pkg fi if [ "$ElementDistro" = "$TF90LM" ] ; then cd /d/tf90lm/code_stg vim TFL09143.pkg export TF90_CONNECT_STRING=DSN=TF90LMS;export Description=TF90LMS;export Trusted_Connection=Yes;export WSID=APP03- BSI;export DATABASE=TF90LMS; start tfloader -l -v ../TFL09143.pkg fi if [ "$ElementDistro" = "TF90PV" ] ; then cd /d/tf90pv/code_stg vim TFL09143.pkg export TF90_CONNECT_STRING=DSN=TF90PVS;Description=TF90PVS;Trusted_Connection=Yes;WSID=APP03- BSI;DATABASE=TF90PVS; start tfloader -l –v ../TFL09143.pkg fi exit 0

    Read the article

  • WMI Remote Process Starting

    - by Goober
    Scenario I've written a WMI Wrapper that seems to be quite sufficient, however whenever I run the code to start a remote process on a server, I see the process name appear in the task manager but the process itself does not start like it should (as in, I don't see the command line log window of the process that prints out what it's doing etc.) The process I am trying to start is just a C# application executable that I have written. Below is my WMI Wrapper Code and the code I am using to start running the process. Question Is the process actually running? - Even if it is only displaying the process name in the task manager and not actually launching the application to the users window? Code To Start The Process IPHostEntry hostEntry = Dns.GetHostEntry("InsertServerName"); WMIWrapper wrapper = new WMIWrapper("Insert User Name", "Insert Password", hostEntry.HostName); List<Process> processes = wrapper.GetProcesses(); foreach (Process process in processes) { if (process.Caption.Equals("MyAppName.exe")) { Console.WriteLine(process.Caption); Console.WriteLine(process.CommandLine); int processId; wrapper.StartProcess("E:\\MyData\\Data\\MyAppName.exe", out processId); Console.WriteLine(processId.ToString()); } } Console.ReadLine(); WMI Wrapper Code using System; using System.Collections.Generic; using System.Management; using System.Runtime.InteropServices; using Common.WMI.Objects; using System.Net; namespace Common.WMIWrapper { public class WMIWrapper : IDisposable { #region Constructor /// <summary> /// Creates a new instance of the wrapper /// </summary> /// <param jobName="username"></param> /// <param jobName="password"></param> /// <param jobName="server"></param> public WMIWrapper(string server) { Initialise(server); } /// <summary> /// Creates a new instance of the wrapper /// </summary> /// <param jobName="username"></param> /// <param jobName="password"></param> /// <param jobName="server"></param> public WMIWrapper(string username, string password, string server) { Initialise(username, password, server); } #endregion #region Destructor /// <summary> /// Clean up unmanaged references /// </summary> ~WMIWrapper() { Dispose(false); } #endregion #region Initialise /// <summary> /// Initialise the WMI Connection (local machine) /// </summary> /// <param name="server"></param> private void Initialise(string server) { m_server = server; // set connection options m_connectOptions = new ConnectionOptions(); IPHostEntry host = Dns.GetHostEntry(Environment.MachineName); } /// <summary> /// Initialise the WMI connection /// </summary> /// <param jobName="username">Username to connect to server with</param> /// <param jobName="password">Password to connect to server with</param> /// <param jobName="server">Server to connect to</param> private void Initialise(string username, string password, string server) { m_server = server; // set connection options m_connectOptions = new ConnectionOptions(); IPHostEntry host = Dns.GetHostEntry(Environment.MachineName); if (host.HostName.Equals(server, StringComparison.OrdinalIgnoreCase)) return; m_connectOptions.Username = username; m_connectOptions.Password = password; m_connectOptions.Impersonation = ImpersonationLevel.Impersonate; m_connectOptions.EnablePrivileges = true; } #endregion /// <summary> /// Return a list of available wmi namespaces /// </summary> /// <returns></returns> public List<String> GetWMINamespaces() { ManagementScope wmiScope = new ManagementScope(String.Format("\\\\{0}\\root", this.Server), this.ConnectionOptions); List<String> 
wmiNamespaceList = new List<String>(); ManagementClass wmiNamespaces = new ManagementClass(wmiScope, new ManagementPath("__namespace"), null); ; foreach (ManagementObject ns in wmiNamespaces.GetInstances()) wmiNamespaceList.Add(ns["Name"].ToString()); return wmiNamespaceList; } /// <summary> /// Return a list of available classes in a namespace /// </summary> /// <param jobName="wmiNameSpace">Namespace to get wmi classes for</param> /// <returns>List of classes in the requested namespace</returns> public List<String> GetWMIClassList(string wmiNameSpace) { ManagementScope wmiScope = new ManagementScope(String.Format("\\\\{0}\\root\\{1}", this.Server, wmiNameSpace), this.ConnectionOptions); List<String> wmiClasses = new List<String>(); ManagementObjectSearcher wmiSearcher = new ManagementObjectSearcher(wmiScope, new WqlObjectQuery("SELECT * FROM meta_Class"), null); foreach (ManagementClass wmiClass in wmiSearcher.Get()) wmiClasses.Add(wmiClass["__CLASS"].ToString()); return wmiClasses; } /// <summary> /// Get a list of wmi properties for the specified class /// </summary> /// <param jobName="wmiNameSpace">WMI Namespace</param> /// <param jobName="wmiClass">WMI Class</param> /// <returns>List of properties for the class</returns> public List<String> GetWMIClassPropertyList(string wmiNameSpace, string wmiClass) { List<String> wmiClassProperties = new List<string>(); ManagementClass managementClass = GetWMIClass(wmiNameSpace, wmiClass); foreach (PropertyData property in managementClass.Properties) wmiClassProperties.Add(property.Name); return wmiClassProperties; } /// <summary> /// Returns a list of methods for the class /// </summary> /// <param jobName="wmiNameSpace"></param> /// <param jobName="wmiClass"></param> /// <returns></returns> public List<String> GetWMIClassMethodList(string wmiNameSpace, string wmiClass) { List<String> wmiClassMethods = new List<string>(); ManagementClass managementClass = GetWMIClass(wmiNameSpace, wmiClass); foreach (MethodData method in managementClass.Methods) wmiClassMethods.Add(method.Name); return wmiClassMethods; } /// <summary> /// Retrieve the specified management class /// </summary> /// <param jobName="wmiNameSpace">Namespace of the class</param> /// <param jobName="wmiClass">Type of the class</param> /// <returns></returns> public ManagementClass GetWMIClass(string wmiNameSpace, string wmiClass) { ManagementScope wmiScope = new ManagementScope(String.Format("\\\\{0}\\root\\{1}", this.Server, wmiNameSpace), this.ConnectionOptions); ManagementClass managementClass = null; ManagementObjectSearcher wmiSearcher = new ManagementObjectSearcher(wmiScope, new WqlObjectQuery(String.Format("SELECT * FROM meta_Class WHERE __CLASS = '{0}'", wmiClass)), null); foreach (ManagementClass wmiObject in wmiSearcher.Get()) managementClass = wmiObject; return managementClass; } /// <summary> /// Get an instance of the specficied class /// </summary> /// <param jobName="wmiNameSpace">Namespace of the classes</param> /// <param jobName="wmiClass">Type of the classes</param> /// <returns>Array of management classes</returns> public ManagementObject[] GetWMIClassObjects(string wmiNameSpace, string wmiClass) { ManagementScope wmiScope = new ManagementScope(String.Format("\\\\{0}\\root\\{1}", this.Server, wmiNameSpace), this.ConnectionOptions); List<ManagementObject> wmiClasses = new List<ManagementObject>(); ManagementObjectSearcher wmiSearcher = new ManagementObjectSearcher(wmiScope, new WqlObjectQuery(String.Format("SELECT * FROM {0}", wmiClass)), null); foreach 
(ManagementObject wmiObject in wmiSearcher.Get()) wmiClasses.Add(wmiObject); return wmiClasses.ToArray(); } /// <summary> /// Get a full list of services /// </summary> /// <returns></returns> public List<Service> GetServices() { return GetService(null); } /// <summary> /// Get a list of services /// </summary> /// <returns></returns> public List<Service> GetService(string name) { ManagementObject[] services = GetWMIClassObjects("CIMV2", "WIN32_Service"); List<Service> serviceList = new List<Service>(); for (int i = 0; i < services.Length; i++) { ManagementObject managementObject = services[i]; Service service = new Service(managementObject); service.Status = (string)managementObject["Status"]; service.Name = (string)managementObject["Name"]; service.DisplayName = (string)managementObject["DisplayName"]; service.PathName = (string)managementObject["PathName"]; service.ProcessId = (uint)managementObject["ProcessId"]; service.Started = (bool)managementObject["Started"]; service.StartMode = (string)managementObject["StartMode"]; service.ServiceType = (string)managementObject["ServiceType"]; service.InstallDate = (string)managementObject["InstallDate"]; service.Description = (string)managementObject["Description"]; service.Caption = (string)managementObject["Caption"]; if (String.IsNullOrEmpty(name) || name.Equals(service.Name, StringComparison.OrdinalIgnoreCase)) serviceList.Add(service); } return serviceList; } /// <summary> /// Get a list of processes /// </summary> /// <returns></returns> public List<Process> GetProcesses() { return GetProcess(null); } /// <summary> /// Get a list of processes /// </summary> /// <returns></returns> public List<Process> GetProcess(uint? processId) { ManagementObject[] processes = GetWMIClassObjects("CIMV2", "WIN32_Process"); List<Process> processList = new List<Process>(); for (int i = 0; i < processes.Length; i++) { ManagementObject managementObject = processes[i]; Process process = new Process(managementObject); process.Priority = (uint)managementObject["Priority"]; process.ProcessId = (uint)managementObject["ProcessId"]; process.Status = (string)managementObject["Status"]; DateTime createDate; if (ConvertFromWmiDate((string)managementObject["CreationDate"], out createDate)) process.CreationDate = createDate.ToString("dd-MMM-yyyy HH:mm:ss"); process.Caption = (string)managementObject["Caption"]; process.CommandLine = (string)managementObject["CommandLine"]; process.Description = (string)managementObject["Description"]; process.ExecutablePath = (string)managementObject["ExecutablePath"]; process.ExecutionState = (string)managementObject["ExecutionState"]; process.MaximumWorkingSetSize = (UInt32?)managementObject ["MaximumWorkingSetSize"]; process.MinimumWorkingSetSize = (UInt32?)managementObject["MinimumWorkingSetSize"]; process.KernelModeTime = (UInt64)managementObject["KernelModeTime"]; process.ThreadCount = (UInt32)managementObject["ThreadCount"]; process.UserModeTime = (UInt64)managementObject["UserModeTime"]; process.VirtualSize = (UInt64)managementObject["VirtualSize"]; process.WorkingSetSize = (UInt64)managementObject["WorkingSetSize"]; if (processId == null || process.ProcessId == processId.Value) processList.Add(process); } return processList; } /// <summary> /// Start the specified process /// </summary> /// <param jobName="commandLine"></param> /// <returns></returns> public bool StartProcess(string command, out int processId) { processId = int.MaxValue; ManagementClass processClass = GetWMIClass("CIMV2", "WIN32_Process"); object[] objectsIn = 
new object[4]; objectsIn[0] = command; processClass.InvokeMethod("Create", objectsIn); if (objectsIn[3] == null) return false; processId = int.Parse(objectsIn[3].ToString()); return true; } /// <summary> /// Schedule a process on the remote machine /// </summary> /// <param name="command"></param> /// <param name="scheduleTime"></param> /// <param name="jobName"></param> /// <returns></returns> public bool ScheduleProcess(string command, DateTime scheduleTime, out string jobName) { jobName = String.Empty; ManagementClass scheduleClass = GetWMIClass("CIMV2", "Win32_ScheduledJob"); object[] objectsIn = new object[7]; objectsIn[0] = command; objectsIn[1] = String.Format("********{0:00}{1:00}{2:00}.000000+060", scheduleTime.Hour, scheduleTime.Minute, scheduleTime.Second); objectsIn[5] = true; scheduleClass.InvokeMethod("Create", objectsIn); if (objectsIn[6] == null) return false; UInt32 scheduleid = (uint)objectsIn[6]; jobName = scheduleid.ToString(); return true; } /// <summary> /// Returns the current time on the remote server /// </summary> /// <returns></returns> public DateTime Now() { ManagementScope wmiScope = new ManagementScope(String.Format("\\\\{0}\\root\\{1}", this.Server, "CIMV2"), this.ConnectionOptions); ManagementClass managementClass = null; ManagementObjectSearcher wmiSearcher = new ManagementObjectSearcher(wmiScope, new WqlObjectQuery(String.Format("SELECT * FROM Win32_LocalTime")), null); DateTime localTime = DateTime.MinValue; foreach (ManagementObject time in wmiSearcher.Get()) { UInt32 day = (UInt32)time["Day"]; UInt32 month = (UInt32)time["Month"]; UInt32 year = (UInt32)time["Year"]; UInt32 hour = (UInt32)time["Hour"]; UInt32 minute = (UInt32)time["Minute"]; UInt32 second = (UInt32)time["Second"]; localTime = new DateTime((int)year, (int)month, (int)day, (int)hour, (int)minute, (int)second); }; return localTime; } /// <summary> /// Converts a wmi date into a proper date /// </summary> /// <param jobName="wmiDate">Wmi formatted date</param> /// <returns>Date time object</returns> private static bool ConvertFromWmiDate(string wmiDate, out DateTime properDate) { properDate = DateTime.MinValue; string properDateString; // check if string is populated if (String.IsNullOrEmpty(wmiDate)) return false; wmiDate = wmiDate.Trim().ToLower().Replace("*", "0"); string[] months = new string[] { "Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec" }; try { properDateString = String.Format("{0}-{1}-{2} {3}:{4}:{5}.{6}", wmiDate.Substring(6, 2), months[int.Parse(wmiDate.Substring(4, 2)) - 1], wmiDate.Substring(0, 4), wmiDate.Substring(8, 2), wmiDate.Substring(10, 2), wmiDate.Substring(12, 2), wmiDate.Substring(15, 6)); } catch (InvalidCastException) { return false; } catch (ArgumentOutOfRangeException) { return false; } // try and parse the new date if (!DateTime.TryParse(properDateString, out properDate)) return false; // true if conversion successful return true; } private bool m_disposed; #region IDisposable Members /// <summary> /// Managed dispose /// </summary> public void Dispose() { Dispose(true); GC.SuppressFinalize(this); } /// <summary> /// Dispose of managed and unmanaged objects /// </summary> /// <param jobName="disposing"></param> public void Dispose(bool disposing) { if (disposing) { m_connectOptions = null; } } #endregion #region Properties private ConnectionOptions m_connectOptions; /// <summary> /// Gets or sets the management scope /// </summary> private ConnectionOptions ConnectionOptions { get { return m_connectOptions; } set { 
m_connectOptions = value; } } private String m_server; /// <summary> /// Gets or sets the server to connect to /// </summary> public String Server { get { return m_server; } set { m_server = value; } } #endregion } }
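    For illustration (not part of the original post): a minimal C# sketch of the call that StartProcess ultimately makes, using the named method parameters of Win32_Process.Create so the ReturnValue can be checked directly; a ReturnValue of 0 confirms the remote process really was created. Worth noting: a process created on a remote machine through Win32_Process.Create runs non-interactively, so it can be alive in Task Manager without ever showing a window on anyone's desktop. The server name, credentials, and path below are the placeholders from the question.

        using System;
        using System.Management;

        class RemoteProcessSketch
        {
            static void Main()
            {
                // Same placeholder credentials/server as the question's wrapper.
                var options = new ConnectionOptions
                {
                    Username = "Insert User Name",
                    Password = "Insert Password",
                    Impersonation = ImpersonationLevel.Impersonate,
                    EnablePrivileges = true
                };
                var scope = new ManagementScope(@"\\InsertServerName\root\CIMV2", options);
                scope.Connect();

                using (var processClass = new ManagementClass(scope, new ManagementPath("Win32_Process"), null))
                using (var inParams = processClass.GetMethodParameters("Create"))
                {
                    inParams["CommandLine"] = @"E:\MyData\Data\MyAppName.exe";

                    using (var outParams = processClass.InvokeMethod("Create", inParams, null))
                    {
                        // 0 = success; any other value is a Win32_Process error code.
                        uint returnValue = (uint)outParams["ReturnValue"];
                        uint processId = (uint)outParams["ProcessId"];
                        Console.WriteLine("ReturnValue: {0}, ProcessId: {1}", returnValue, processId);
                    }
                }
            }
        }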

    Read the article

  • trouble running smooth animation in thread only when using key listener

    - by heysuse renard
    First time using a forum for coding help, so sorry if I post this all wrong. I have more than a few classes; I don't think ScreenManager or Core holds the problem, but I included them just in case. I got most of this code working through a set of tutorials, but at a certain point I started trying to do more on my own. I want to play the animation only when I'm moving my sprite. In my KeyTest class I am using threads to run the animation; it used to work (poorly) but now not at all, plus it really gunks up my computer. I think it's because of the thread. I'm new to threads, so I'm not too sure if I should even be using one in this situation or whether it's dangerous for my computer. The animation worked smoothly when I had the sprite bounce around the screen forever and the animation loop played without stopping. I think the main problem is between the animationThread, Sprite, and KeyTest classes, but it could be more in depth. If someone could point me in the right direction for making the animation run smoothly when I push down a key and stop running when I let off, it would be greatly appreciated. I already looked at this: Java a moving animation (sprite); obviously we were doing the same tutorial, but I feel my problem is slightly different.
    import java.awt.*; import java.awt.event.KeyEvent; import java.awt.event.KeyListener; import java.awt.image.BufferStrategy; import java.awt.image.BufferedImage; import java.util.ArrayList; import javax.swing.ImageIcon; import javax.swing.JFrame; public class KeyTest extends Core implements KeyListener { public static void main(String[] args) { new KeyTest().run(); } Sprite player1; Image hobo; Image background; animation hoboRun; animationThread t1; //init also calls init form superclass public void init() { super.init(); loadImages(); Window w = s.getFullScreenWindow(); w.setFocusTraversalKeysEnabled(false); w.addKeyListener(this); } //load method will go here.
//load all pics need for animation and sprite public void loadImages() { background = new ImageIcon("\\\\STUART-PC\\Users\\Stuart\\workspace\\Gaming\\yellow square.jpg").getImage(); Image face1 = new ImageIcon("\\\\STUART-PC\\Users\\Stuart\\workspace\\Gaming\\circle.png").getImage(); Image face2 = new ImageIcon("\\\\STUART-PC\\Users\\Stuart\\workspace\\Gaming\\one eye.png").getImage(); hoboRun = new animation(); hoboRun.addScene(face1, 250); hoboRun.addScene(face2, 250); player1 = new Sprite(hoboRun); this.t1 = new animationThread(); this.t1.setAnimation(player1); } //key pressed public void keyPressed(KeyEvent e) { int keyCode = e.getKeyCode(); if (keyCode == KeyEvent.VK_ESCAPE) { stop(); } if (keyCode == KeyEvent.VK_RIGHT) { player1.setVelocityX(0.3f); try { this.t1.setRunning(true); Thread th1 = new Thread(this.t1); th1.start(); } catch (Exception ex) { System.out.println("noooo"); } } if (keyCode == KeyEvent.VK_LEFT) { player1.setVelocityX(-0.3f); try { this.t1.setRunning(true); Thread th1 = new Thread(this.t1); th1.start(); } catch (Exception ex) { System.out.println("noooo"); } } if (keyCode == KeyEvent.VK_DOWN) { player1.setVelocityY(0.3f); try { this.t1.setRunning(true); Thread th1 = new Thread(this.t1); th1.start(); } catch (Exception ex) { System.out.println("noooo"); } } if (keyCode == KeyEvent.VK_UP) { player1.setVelocityY(-0.3f); try { this.t1.setRunning(true); Thread th1 = new Thread(this.t1);; th1.start(); } catch (Exception ex) { System.out.println("noooo"); } } else { e.consume(); } } //keyReleased @SuppressWarnings("static-access") public void keyReleased(KeyEvent e) { int keyCode = e.getKeyCode(); if (keyCode == KeyEvent.VK_RIGHT || keyCode == KeyEvent.VK_LEFT) { player1.setVelocityX(0); try { this.t1.setRunning(false); } catch (Exception ex) { } } if (keyCode == KeyEvent.VK_UP || keyCode == KeyEvent.VK_DOWN) { player1.setVelocityY(0); try { this.t1.setRunning(false); } catch (Exception ex) { } } else { e.consume(); } } //last method from interface public void keyTyped(KeyEvent e) { e.consume(); } //draw public void draw(Graphics2D g) { Window w = s.getFullScreenWindow(); g.setColor(w.getBackground()); g.fillRect(0, 0, s.getWidth(), s.getHieght()); g.setColor(w.getForeground()); g.drawImage(player1.getImage(), Math.round(player1.getX()), Math.round(player1.getY()), null); } public void update(long timePassed) { player1.update(timePassed); } } abstract class Core { private static DisplayMode modes[] = { new DisplayMode(1600, 900, 64, 0), new DisplayMode(800, 600, 32, 0), new DisplayMode(800, 600, 24, 0), new DisplayMode(800, 600, 16, 0), new DisplayMode(800, 480, 32, 0), new DisplayMode(800, 480, 24, 0), new DisplayMode(800, 480, 16, 0),}; private boolean running; protected ScreenManager s; //stop method public void stop() { running = false; } public void run() { try { init(); gameLoop(); } finally { s.restoreScreen(); } } //set to full screen //set current background here public void init() { s = new ScreenManager(); DisplayMode dm = s.findFirstCompatibleMode(modes); s.setFullScreen(dm); Window w = s.getFullScreenWindow(); w.setFont(new Font("Arial", Font.PLAIN, 20)); w.setBackground(Color.GREEN); w.setForeground(Color.WHITE); running = true; } //main gameLoop public void gameLoop() { long startTime = System.currentTimeMillis(); long cumTime = startTime; while (running) { long timePassed = System.currentTimeMillis() - cumTime; cumTime += timePassed; update(timePassed); Graphics2D g = s.getGraphics(); draw(g); g.dispose(); s.update(); try { Thread.sleep(20); } catch 
(Exception ex) { } } } //update animation public void update(long timePassed) { } //draws to screen abstract void draw(Graphics2D g); } class animationThread implements Runnable { String name; boolean playing; Sprite a; //constructor takes input from keyboard public animationThread() { } //The run method for animation public void run() { long startTime = System.currentTimeMillis(); long cumTime = startTime; boolean test = getRunning(); while (test) { long timePassed = System.currentTimeMillis() - cumTime; cumTime += timePassed; test = getRunning(); } } public String getName() { return name; } public void setAnimation(Sprite a) { this.a = a; } public void setName(String name) { this.name = name; } public void setRunning(boolean running) { this.playing = running; } public boolean getRunning() { return playing; } } class animation { private ArrayList scenes; private int sceneIndex; private long movieTime; private long totalTime; //constructor public animation() { scenes = new ArrayList(); totalTime = 0; start(); } //add scene to ArrayLisy and set time for each scene public synchronized void addScene(Image i, long t) { totalTime += t; scenes.add(new OneScene(i, totalTime)); } public synchronized void start() { movieTime = 0; sceneIndex = 0; } //change scenes public synchronized void update(long timePassed) { if (scenes.size() > 1) { movieTime += timePassed; if (movieTime >= totalTime) { movieTime = 0; sceneIndex = 0; } while (movieTime > getScene(sceneIndex).endTime) { sceneIndex++; } } } //get animations current scene(aka image) public synchronized Image getImage() { if (scenes.size() == 0) { return null; } else { return getScene(sceneIndex).pic; } } //get scene private OneScene getScene(int x) { return (OneScene) scenes.get(x); } //Private Inner CLASS////////////// private class OneScene { Image pic; long endTime; public OneScene(Image pic, long endTime) { this.pic = pic; this.endTime = endTime; } } } class Sprite { private animation a; private float x; private float y; private float vx; private float vy; //Constructor public Sprite(animation a) { this.a = a; } //change position public void update(long timePassed) { x += vx * timePassed; y += vy * timePassed; } public void startAnimation(long timePassed) { a.update(timePassed); } //get x position public float getX() { return x; } //get y position public float getY() { return y; } //set x public void setX(float x) { this.x = x; } //set y public void setY(float y) { this.y = y; } //get sprite width public int getWidth() { return a.getImage().getWidth(null); } //get sprite height public int getHeight() { return a.getImage().getHeight(null); } //get horizontal velocity public float getVelocityX() { return vx; } //get vertical velocity public float getVelocityY() { return vx; } //set horizontal velocity public void setVelocityX(float vx) { this.vx = vx; } //set vertical velocity public void setVelocityY(float vy) { this.vy = vy; } //get sprite / image public Image getImage() { return a.getImage(); } } class ScreenManager { private GraphicsDevice vc; public ScreenManager() { GraphicsEnvironment e = GraphicsEnvironment.getLocalGraphicsEnvironment(); vc = e.getDefaultScreenDevice(); } //get all compatible DM public DisplayMode[] getCompatibleDisplayModes() { return vc.getDisplayModes(); } //compares DM passed into vc DM and see if they match public DisplayMode findFirstCompatibleMode(DisplayMode modes[]) { DisplayMode goodModes[] = vc.getDisplayModes(); for (int x = 0; x < modes.length; x++) { for (int y = 0; y < goodModes.length; y++) { if 
(displayModesMatch(modes[x], goodModes[y])) { return modes[x]; } } } return null; } //get current DM public DisplayMode getCurrentDisplayMode() { return vc.getDisplayMode(); } //checks if two modes match each other public boolean displayModesMatch(DisplayMode m1, DisplayMode m2) { if (m1.getWidth() != m2.getWidth() || m1.getHeight() != m2.getHeight()) { return false; } if (m1.getBitDepth() != DisplayMode.BIT_DEPTH_MULTI && m2.getBitDepth() != DisplayMode.BIT_DEPTH_MULTI && m1.getBitDepth() != m2.getBitDepth()) { return false; } if (m1.getRefreshRate() != DisplayMode.REFRESH_RATE_UNKNOWN && m2.getRefreshRate() != DisplayMode.REFRESH_RATE_UNKNOWN && m1.getRefreshRate() != m2.getRefreshRate()) { return false; } return true; } //make frame full screen public void setFullScreen(DisplayMode dm) { JFrame f = new JFrame(); f.setUndecorated(true); f.setIgnoreRepaint(true); f.setResizable(false); vc.setFullScreenWindow(f); if (dm != null && vc.isDisplayChangeSupported()) { try { vc.setDisplayMode(dm); } catch (Exception ex) { } } f.createBufferStrategy(2); } //sets graphics object = this return public Graphics2D getGraphics() { Window w = vc.getFullScreenWindow(); if (w != null) { BufferStrategy s = w.getBufferStrategy(); return (Graphics2D) s.getDrawGraphics(); } else { return null; } } //updates display public void update() { Window w = vc.getFullScreenWindow(); if (w != null) { BufferStrategy s = w.getBufferStrategy(); if (!s.contentsLost()) { s.show(); } } } //returns full screen window public Window getFullScreenWindow() { return vc.getFullScreenWindow(); } //get width of window public int getWidth() { Window w = vc.getFullScreenWindow(); if (w != null) { return w.getWidth(); } else { return 0; } } //get height of window public int getHieght() { Window w = vc.getFullScreenWindow(); if (w != null) { return w.getHeight(); } else { return 0; } } //get out of full screen public void restoreScreen() { Window w = vc.getFullScreenWindow(); if (w != null) { w.dispose(); } vc.setFullScreenWindow(null); } //create image compatible with monitor public BufferedImage createCopatibleImage(int w, int h, int t) { Window win = vc.getFullScreenWindow(); if (win != null) { GraphicsConfiguration gc = win.getGraphicsConfiguration(); return gc.createCompatibleImage(w, h, t); } return null; } }
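    For illustration (not taken from the tutorial code above): the usual way to get a smooth start/stop is to keep one long-lived animation loop and have keyPressed/keyReleased only flip a volatile flag, rather than constructing and starting a new Thread on every key press. A minimal, self-contained Java sketch; the class and field names (KeyAnimator, moving) are made up for the example.

        import java.awt.event.KeyAdapter;
        import java.awt.event.KeyEvent;
        import javax.swing.JFrame;

        public class KeyAnimator {

            // volatile so the animation thread sees updates made on the event dispatch thread
            private volatile boolean moving = false;

            private void startLoop() {
                Thread loop = new Thread(() -> {
                    long last = System.currentTimeMillis();
                    while (true) {
                        long now = System.currentTimeMillis();
                        long elapsed = now - last;
                        last = now;
                        if (moving) {
                            // advance the sprite/animation here, e.g. player1.update(elapsed)
                            System.out.println("advance by " + elapsed + " ms");
                        }
                        try { Thread.sleep(20); } catch (InterruptedException e) { return; }
                    }
                });
                loop.setDaemon(true);
                loop.start();
            }

            private void install(JFrame frame) {
                frame.addKeyListener(new KeyAdapter() {
                    @Override public void keyPressed(KeyEvent e)  { moving = true;  }
                    @Override public void keyReleased(KeyEvent e) { moving = false; }
                });
            }

            public static void main(String[] args) {
                JFrame frame = new JFrame("sketch");
                frame.setSize(200, 200);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                KeyAnimator animator = new KeyAnimator();
                animator.install(frame);
                animator.startLoop();
                frame.setVisible(true);
            }
        }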

    Read the article

  • Merging multiple Google calendar feeds into one JSON object in javascript

    - by Jeramy
    I am trying to bring in the JSON feeds from multiple Google calendars so that I can sort the upcoming events and display the next X number in an "Upcoming Events" list. I have this working with Yahoo! pipes but I want to get away from using a 3rd party to aggregate. I think I am close, but I cannot get the JSON objects created correctly. I am getting the data into the array but not in JSON format, so I can't manipulate it. I have tried var myJsonString = JSON.stringify(JSONData); using https://github.com/douglascrockford/JSON-js but that just threw errors. I suspect because my variable is in the wrong starting format. I have tried just calling the feed like: $.getJSON(url); and creating a function concant1() to do the JSONData=JSONData.concat(data);, but it doesn't fire and I think it would produce the same end result anyway. I have also tried several other methods of getting the end result I want with varying degrees of doom. Here is the closest I have come so far: var JSONData = new Array(); var urllist = ["https://www.google.com/calendar/feeds/dg61asqgqg4pust2l20obgdl64%40group.calendar.google.com/public/full?orderby=starttime&max-results=3&sortorder=ascending&futureevents=true&ctz=America/New_York&singleevents=true&alt=json&callback=concant1","https://www.google.com/calendar/feeds/5oc3kvp7lnu5rd4krg2skcu2ng%40group.calendar.google.com/public/full?orderby=starttime&max-results=3&sortorder=ascending&futureevents=true&ctz=America/New_York&singleevents=true&alt=json&callback=concant1","http://www.google.com/calendar/feeds/rine4umu96kl6t46v4fartnho8%40group.calendar.google.com/public/full?orderby=starttime&max-results=3&sortorder=ascending&futureevents=true&ctz=America/New_York&singleevents=true&alt=json&callback=concant1"]; urllist.forEach(function addFeed(url){ alert("The URL being used: "+ url); if (void 0 != JSONData){JSONData=JSONData.concat($.getJSON(url));} else{JSONData = $.getJSON(url);} alert("The count from concantonated JSONData: "+JSONData.length); }); document.write("The final count from JSONData: "+JSONData.length+"<p>"); console.log(JSONData) UPDATE: Now with full working source!! :) If anyone would like to make suggestions on how to improve the code's efficiency it would be gratefully accepted. I hope others find this useful.: // GCal MFA - Google Calendar Multiple Feed Aggregator // Useage: GCalMFA(CIDs,n); // Where 'CIDs' is a list of comma seperated Google calendar IDs in the format: [email protected], and 'n' is the number of results to display. // While the contained console.log(); outputs are really handy for testing, you will probably waant to remove them for regular usage // Author: Jeramy Kruser - http://jeramy.kruser.me //onerror=function (d, f, g){alert (d+ "\n"+ f+ "\n");} if (!window.console) {console = {log: function() {}};} document.body.className += ' js-enabled'; // Global variables var urllist = []; var maxResults = 3; // The default is 3 results unless a value is sent var JSONData = {}; var eventCount = 0; var errorLog = ""; JSONData = { count: 0, value : { description: "Aggregates multiple Google calendar feeds into a single sorted list", generator: "StackOverflow communal coding - Thanks for the assist Patrick M", website: "http://jeramy.kruser.me", author: "Jeramy & Kasey Kruser", items: [] }}; // For putting dates from feed into a format that can be read by the Date function for calculating event length. 
function parse (str) { // validate year as 4 digits, month as 01-12, and day as 01-31 str = str.match (/^(\d{4})(0[1-9]|1[0-2])(0[1-9]|[12]\d|3[01])$/); if (str) { // make a date str[0] = new Date ( + str[1], + str[2] - 1, + str[3]); // check if month stayed the same (ie that day number is valid) if (str[0].getMonth () === + str[2] - 1) { return str[0]; } } return undefined; } //For outputting to HTML function output() { var months, day_in_ms, summary, i, item, eventlink, title, calendar, where,dtstart, dtend, endyear, endmonth, endday, startyear, startmonth, startday, endmonthdayyear, eventlinktitle, startmonthday, length, curtextval, k; // Array of month names from numbers for page display. months = {'0':'January', '1':'February', '2':'March', '3':'April', '4':'May', '5':'June', '6':'July', '7':'August', '8':'September', '9':'October', '10':'November', '11':'December'}; // For use in calculating event length. day_in_ms = 24 * 60 * 60 * 1000; // Instantiate HTML Arrays. summary = []; for (i = 0; i < maxResults; i+=1 ) { //console.log("i: "+i+" < "+"maxResults: "+ maxResults); if (!(JSONData.value.items[i] === undefined)) { item = JSONData.value.items[i]; // Grabbing data for each event in the feed. eventlink = item.link[0]; title = item.title.$t; // Only display the calendar title if there is more than one calendar = ""; if (urllist.length > 1) { calendar = '<br />Calendar: <a href="https://www.google.com/calendar/embed?src=' + item.gd$who[0].email + '&ctz=America/New_York">' + item.author[0].name.$t + '<\/a> (<a href="https://www.google.com/calendar/ical/' + item.gd$who[0].email + '/public/basic.ics">iCal<\/a>)'; } // Grabbing event location, if entered. if ( item.gd$where[0].valueString !== "" ) { where = '<br />' + (item.gd$where[0].valueString); } else { where = (""); } // Grabbing start date and putting in form YYYYmmdd. Subtracting one day from dtend to fix Google's habit of ending an all-day event at midnight on the following day. dtstart = new Date(parse(((item.gd$when[0].startTime).substring(0,10)).replace(/-/g,""))); dtend = new Date(parse(((item.gd$when[0].endTime).substring(0,10)).replace(/-/g,"")) - day_in_ms); // Put dates in pretty form for display. endyear = dtend.getFullYear(); endmonth = months[dtend.getMonth()]; endday = dtend.getDate(); startyear = dtstart.getFullYear(); startmonth = months[dtstart.getMonth()]; startday = dtstart.getDate(); //consolidate some much-used variables for HTML output. endmonthdayyear = endmonth + ' ' + endday + ', ' + endyear; eventlinktitle = '<a href="' + eventlink + '">' + title + '<\/a>'; startmonthday = startmonth + ' ' + startday; // Calculates the number of days between each event's start and end dates. length = ((dtend - dtstart) / day_in_ms); // HTML for each event, depending on which div is available on the page (different HTML applies). Only one div can exist on any one page. if (document.getElementById("homeCalendar") !== null ) { // If the length of the event is greater than 0 days, show start and end dates. if ( length > 0 && startmonth !== endmonth && startday === endday ) { summary[i] = ('<h3>' + eventlink + '">' + startmonthday + ', ' + startyear + ' - ' + endmonthdayyear + '<\/a><\/h3><p>' + title + '<\/p>'); } // If the length of the event is greater than 0 and begins and ends within the same month, shorten the date display. 
else if ( length > 0 && startmonth === endmonth && startyear === endyear ) { summary[i] = ('<h3><a href="' + eventlink + '">' + startmonthday + '-' + endday + ', ' + endyear + '<\/a><\/h3><p>' + title + '<\/p>'); } // If the length of the event is greater than 0 and begins and ends within different months of the same year, shorten the date display. else if ( length > 0 && startmonth !== endmonth && startyear === endyear ) { summary[i] = ('<h3><a href="' + eventlink + '">' + startmonthday + ' - ' + endmonthdayyear + '<\/a><\/h3><p>' + title + '<\/p>'); } // If the length of the event is less than one day (length < = 0), show only the start date. else { summary[i] = ('<h3><a href="' + eventlink + '">' + startmonthday + ', ' + startyear + '<\/a><\/h3><p>' + title + '<\/p>'); } } else if (document.getElementById("allCalendar") !== null ) { // If the length of the event is greater than 0 days, show start and end dates. if ( length > 0 && startmonth !== endmonth && startday === endday ) { summary[i] = ('<li>' + eventlinktitle + '<br />' + startmonthday + ', ' + startyear + ' - ' + endmonthdayyear + where + calendar + '<br />&#160;<\/li>'); } // If the length of the event is greater than 0 and begins and ends within the same month, shorten the date display. else if ( length > 0 && startmonth === endmonth && startyear === endyear ) { summary[i] = ('<li>' + eventlinktitle + '<br />' + startmonthday + '-' + endday + ', ' + endyear + where + calendar + '<br />&#160;<\/li>'); } // If the length of the event is greater than 0 and begins and ends within different months of the same year, shorten the date display. else if ( length > 0 && startmonth !== endmonth && startyear === endyear ) { summary[i] = ('<li>' + eventlinktitle + '<br />' + startmonthday + ' - ' + endmonthdayyear + where + calendar + '<br />&#160;<\/li>'); } // If the length of the event is less than one day (length < = 0), show only the start date. else { summary[i] = ('<li>' + eventlinktitle + '<br />' + startmonthday + ', ' + startyear + where + calendar + '<br />&#160;<\/li>'); } } } if (summary[i] === undefined) { summary[i] = "";} //console.log(summary[i]); } console.log(JSONData); // Puts the HTML into the div with the appropriate id. Each page can have only one. 
if (document.getElementById("homeCalendar") !== null ) { curtextval = document.getElementById("homeCalendar"); console.log("homeCalendar: "+curtextval); } else if (document.getElementById("oneCalendar") !== null ) { curtextval = document.getElementById("oneCalendar"); console.log("oneCalendar: "+curtextval); } else if (document.getElementById("allCalendar") !== null ) { curtextval = document.getElementById("allCalendar"); console.log("allCalendar: "+curtextval); } if (curtextval.innerHTML.length < 100) { errorLog += '<div id="noEvents">No events found.</div>'; } for (k = 0; k<maxResults; k+=1 ) { curtextval.innerHTML = curtextval.innerHTML + summary[k]; } if (eventCount === 0) { errorLog += '<div id="noEvents">No events found.</div>'; } if (document.getElementById("homeCalendar") === null ) { curtextval.innerHTML = '<ul>' + curtextval.innerHTML + '<\/ul>'; } if (errorLog !== "") { curtextval.innerHTML += errorLog; } } // For taking in each feed, breaking out the events and sorting them into the object by date function sortFeed(event) { var tempEntry, i; tempEntry = event; i = 0; console.log("*** New incoming event object #"+eventCount+" ***"); console.log(event.title.$t); console.log(event); //console.log("i = " + i + " and maxResults " + maxResults); while(i<maxResults) { console.log("i = " + i + " < maxResults " + maxResults); console.log("Sorting event = " + event.title.$t + " by date of " + event.gd$when[0].startTime.substring(0,10).replace(/-/g,"")); if (JSONData.value.items[i]) { console.log("JSONData.value.items[" + i + "] exists and has a startTime of " + JSONData.value.items[i].gd$when[0].startTime.substring(0,10).replace(/-/g,"")); if (event.gd$when[0].startTime.substring(0,10).replace(/-/g,"")<JSONData.value.items[i].gd$when[0].startTime.substring(0,10).replace(/-/g,"")) { console.log("The incoming event value of " + event.gd$when[0].startTime.substring(0,10).replace(/-/g,"") + " is < " + JSONData.value.items[i].gd$when[0].startTime.substring(0,10).replace(/-/g,"")); tempEntry = JSONData.value.items[i]; console.log("Existing JSONData.value.items[" + i + "] value " + JSONData.value.items[i].gd$when[0].startTime.substring(0,10).replace(/-/g,"") + " stored in tempEntry"); JSONData.value.items[i] = event; console.log("Position JSONData.value.items[" + i + "] set to new value: " + event.gd$when[0].startTime.substring(0,10).replace(/-/g,"")); event = tempEntry; console.log("Now sorting event = " + event.title.$t + " by date of " + event.gd$when[0].startTime.substring(0,10).replace(/-/g,"")); } else { console.log("The incoming event value of " + event.gd$when[0].startTime.substring(0,10).replace(/-/g,"") + " is > " + JSONData.value.items[i].gd$when[0].startTime.substring(0,10).replace(/-/g,"") + " moving on..."); } } else { JSONData.value.items[i] = event; console.log("JSONData.value.items[" + i + "] does not exist so it was set to the Incoming value of " + event.gd$when[0].startTime.substring(0,10).replace(/-/g,"")); i = maxResults; } i += 1; } } // For completing the aggregation function complete(result) { var str, j, item; // Track the number of calls completed back, we're not done until all URLs have processed if( complete.count === undefined ){ complete.count = urllist.length; } console.log("complete.count = "+complete.count); console.log(result.feed); if(result.feed.entry){ JSONData.count = maxResults; // Check each incoming item against JSONData.value.items console.log("*** Begin Sorting " + result.feed.entry.length + " Events ***"); //console.log(result.feed.entry); 
result.feed.entry.forEach( function(event){ eventCount += 1; sortFeed(event); } ); } if( (complete.count-=1)<1 ) { console.log("*** Done Sorting ***"); output(); } } // This is the main function. It takes in the list of Calendar IDs and the number of results to display function GCalMFA(list,results){ var i, calPreProperties, calPostProperties1, calPostProperties2; calPreProperties = "https://www.google.com/calendar/feeds/"; calPostProperties1 = "/public/full?max-results="; calPostProperties2 = "&orderby=starttime&sortorder=ascending&futureevents=true&ctz=America/New_York&singleevents=true&alt=json&callback=?"; if (list) { if (results) { maxResults = results; } urllist = list.split(','); for (i = 0; i < urllist.length; i+=1 ){ if (urllist[i] === 0){ urllist.splice(i,1);} else{ urllist[i] = calPreProperties + urllist[i] + calPostProperties1+maxResults+calPostProperties2;} } console.log("There are " + urllist.length + " URLs"); urllist.forEach(function addFeed(url){ $.getJSON(url, complete); }); } else { errorLog += '<div id="noURLs">No calendars have been selected.</div>'; output(); } }
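    For illustration (separate from the working source above): the first attempt fails because $.getJSON is asynchronous and returns a jqXHR/promise rather than the parsed feed, so concatenating its return value never yields data. Below is a minimal jQuery sketch of the missing step, collecting the promises and merging the feeds only after they have all resolved; the urls array is a placeholder and the sketch assumes two or more feeds.

        // Collect one promise per calendar feed (alt=json URLs as in the question).
        var urls = [/* Google Calendar JSON feed URLs */];
        var requests = urls.map(function (url) {
            return $.getJSON(url); // returns a promise, not the data
        });

        // Merge and sort only after every request has completed.
        $.when.apply($, requests).done(function () {
            var items = [];
            // With multiple requests, each argument is [data, statusText, jqXHR].
            $.each(arguments, function (i, result) {
                var feed = result[0].feed;
                if (feed && feed.entry) { items = items.concat(feed.entry); }
            });
            // Sort by the GData start time (same-offset RFC 3339 strings compare lexically).
            items.sort(function (a, b) {
                return a.gd$when[0].startTime.localeCompare(b.gd$when[0].startTime);
            });
            console.log(items.slice(0, 3)); // next three events across all calendars
        });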

    Read the article

  • opengl problem works on droid but not droid eris and others.

    - by nathan
    This GlRenderer works fine on the moto droid, but does not work well at all on droid eris or other android phones does anyone know why? package com.ntu.way2fungames.spacehockeybase; import java.io.DataInputStream; import java.io.IOException; import java.nio.Buffer; import java.nio.FloatBuffer; import javax.microedition.khronos.egl.EGLConfig; import javax.microedition.khronos.opengles.GL10; import com.ntu.way2fungames.LoadFloatArray; import com.ntu.way2fungames.OGLTriReader; import android.content.res.AssetManager; import android.content.res.Resources; import android.opengl.GLU; import android.opengl.GLSurfaceView.Renderer; import android.os.Handler; import android.os.Message; public class GlRenderer extends Thread implements Renderer { private float drawArray[]; private float yoff; private float yoff2; private long lastRenderTime; private float[] yoffs= new float[10]; int Width; int Height; private float[] pixelVerts = new float[] { +.0f,+.0f,2, +.5f,+.5f,0, +.5f,-.5f,0, +.0f,+.0f,2, +.5f,-.5f,0, -.5f,-.5f,0, +.0f,+.0f,2, -.5f,-.5f,0, -.5f,+.5f,0, +.0f,+.0f,2, -.5f,+.5f,0, +.5f,+.5f,0, }; @Override public void run() { } private float[] arenaWalls = new float[] { 8.00f,2.00f,1f,2f,2f,1f,2.00f,8.00f,1f,8.00f,2.00f,1f,2.00f,8.00f,1f,8.00f,8.00f,1f, 2.00f,8.00f,1f,2f,2f,1f,0.00f,0.00f,0f,2.00f,8.00f,1f,0.00f,0.00f,0f,0.00f,10.00f,0f, 8.00f,8.00f,1f,2.00f,8.00f,1f,0.00f,10.00f,0f,8.00f,8.00f,1f,0.00f,10.00f,0f,10.00f,10.00f,0f, 2f,2f,1f,8.00f,2.00f,1f,10.00f,0.00f,0f,2f,2f,1f,10.00f,0.00f,0f,0.00f,0.00f,0f, 8.00f,2.00f,1f,8.00f,8.00f,1f,10.00f,10.00f,0f,8.00f,2.00f,1f,10.00f,10.00f,0f,10.00f,0.00f,0f, 10.00f,10.00f,0f,0.00f,10.00f,0f,0.00f,0.00f,0f,10.00f,10.00f,0f,0.00f,0.00f,0f,10.00f,0.00f,0f, 8.00f,6.00f,1f,8.00f,4.00f,1f,122f,4.00f,1f,8.00f,6.00f,1f,122f,4.00f,1f,122f,6.00f,1f, 8.00f,6.00f,1f,122f,6.00f,1f,120f,7.00f,0f,8.00f,6.00f,1f,120f,7.00f,0f,10.00f,7.00f,0f, 122f,4.00f,1f,8.00f,4.00f,1f,10.00f,3.00f,0f,122f,4.00f,1f,10.00f,3.00f,0f,120f,3.00f,0f, 480f,10.00f,0f,470f,10.00f,0f,470f,0.00f,0f,480f,10.00f,0f,470f,0.00f,0f,480f,0.00f,0f, 478f,2.00f,1f,478f,8.00f,1f,480f,10.00f,0f,478f,2.00f,1f,480f,10.00f,0f,480f,0.00f,0f, 472f,2f,1f,478f,2.00f,1f,480f,0.00f,0f,472f,2f,1f,480f,0.00f,0f,470f,0.00f,0f, 478f,8.00f,1f,472f,8.00f,1f,470f,10.00f,0f,478f,8.00f,1f,470f,10.00f,0f,480f,10.00f,0f, 472f,8.00f,1f,472f,2f,1f,470f,0.00f,0f,472f,8.00f,1f,470f,0.00f,0f,470f,10.00f,0f, 478f,2.00f,1f,472f,2f,1f,472f,8.00f,1f,478f,2.00f,1f,472f,8.00f,1f,478f,8.00f,1f, 478f,846f,1f,472f,846f,1f,472f,852f,1f,478f,846f,1f,472f,852f,1f,478f,852f,1f, 472f,852f,1f,472f,846f,1f,470f,844f,0f,472f,852f,1f,470f,844f,0f,470f,854f,0f, 478f,852f,1f,472f,852f,1f,470f,854f,0f,478f,852f,1f,470f,854f,0f,480f,854f,0f, 472f,846f,1f,478f,846f,1f,480f,844f,0f,472f,846f,1f,480f,844f,0f,470f,844f,0f, 478f,846f,1f,478f,852f,1f,480f,854f,0f,478f,846f,1f,480f,854f,0f,480f,844f,0f, 480f,854f,0f,470f,854f,0f,470f,844f,0f,480f,854f,0f,470f,844f,0f,480f,844f,0f, 10.00f,854f,0f,0.00f,854f,0f,0.00f,844f,0f,10.00f,854f,0f,0.00f,844f,0f,10.00f,844f,0f, 8.00f,846f,1f,8.00f,852f,1f,10.00f,854f,0f,8.00f,846f,1f,10.00f,854f,0f,10.00f,844f,0f, 2f,846f,1f,8.00f,846f,1f,10.00f,844f,0f,2f,846f,1f,10.00f,844f,0f,0.00f,844f,0f, 8.00f,852f,1f,2.00f,852f,1f,0.00f,854f,0f,8.00f,852f,1f,0.00f,854f,0f,10.00f,854f,0f, 2.00f,852f,1f,2f,846f,1f,0.00f,844f,0f,2.00f,852f,1f,0.00f,844f,0f,0.00f,854f,0f, 8.00f,846f,1f,2f,846f,1f,2.00f,852f,1f,8.00f,846f,1f,2.00f,852f,1f,8.00f,852f,1f, 6f,846f,1f,4f,846f,1f,4f,8f,1f,6f,846f,1f,4f,8f,1f,6f,8f,1f, 
6f,846f,1f,6f,8f,1f,7f,10f,0f,6f,846f,1f,7f,10f,0f,7f,844f,0f, 4f,8f,1f,4f,846f,1f,3f,844f,0f,4f,8f,1f,3f,844f,0f,3f,10f,0f, 474f,8f,1f,474f,846f,1f,473f,844f,0f,474f,8f,1f,473f,844f,0f,473f,10f,0f, 476f,846f,1f,476f,8f,1f,477f,10f,0f,476f,846f,1f,477f,10f,0f,477f,844f,0f, 476f,846f,1f,474f,846f,1f,474f,8f,1f,476f,846f,1f,474f,8f,1f,476f,8f,1f, 130f,10.00f,0f,120f,10.00f,0f,120f,0.00f,0f,130f,10.00f,0f,120f,0.00f,0f,130f,0.00f,0f, 128f,2.00f,1f,128f,8.00f,1f,130f,10.00f,0f,128f,2.00f,1f,130f,10.00f,0f,130f,0.00f,0f, 122f,2f,1f,128f,2.00f,1f,130f,0.00f,0f,122f,2f,1f,130f,0.00f,0f,120f,0.00f,0f, 128f,8.00f,1f,122f,8.00f,1f,120f,10.00f,0f,128f,8.00f,1f,120f,10.00f,0f,130f,10.00f,0f, 122f,8.00f,1f,122f,2f,1f,120f,0.00f,0f,122f,8.00f,1f,120f,0.00f,0f,120f,10.00f,0f, 128f,2.00f,1f,122f,2f,1f,122f,8.00f,1f,128f,2.00f,1f,122f,8.00f,1f,128f,8.00f,1f, 352f,8.00f,1f,358f,8.00f,1f,358f,2.00f,1f,352f,8.00f,1f,358f,2.00f,1f,352f,2.00f,1f, 358f,2.00f,1f,358f,8.00f,1f,360f,10.00f,0f,358f,2.00f,1f,360f,10.00f,0f,360f,0.00f,0f, 352f,2.00f,1f,358f,2.00f,1f,360f,0.00f,0f,352f,2.00f,1f,360f,0.00f,0f,350f,0.00f,0f, 358f,8.00f,1f,352f,8.00f,1f,350f,10.00f,0f,358f,8.00f,1f,350f,10.00f,0f,360f,10.00f,0f, 352f,8.00f,1f,352f,2.00f,1f,350f,0.00f,0f,352f,8.00f,1f,350f,0.00f,0f,350f,10.00f,0f, 350f,0.00f,0f,360f,0.00f,0f,360f,10.00f,0f,350f,0.00f,0f,360f,10.00f,0f,350f,10.00f,0f, 358f,6.00f,1f,472f,6.00f,1f,470f,7.00f,0f,358f,6.00f,1f,470f,7.00f,0f,360f,7.00f,0f, 472f,4.00f,1f,358f,4.00f,1f,360f,3.00f,0f,472f,4.00f,1f,360f,3.00f,0f,470f,3.00f,0f, 472f,4.00f,1f,472f,6.00f,1f,358f,6.00f,1f,472f,4.00f,1f,358f,6.00f,1f,358f,4.00f,1f, 472f,848f,1f,472f,850f,1f,358f,850f,1f,472f,848f,1f,358f,850f,1f,358f,848f,1f, 472f,848f,1f,358f,848f,1f,360f,847f,0f,472f,848f,1f,360f,847f,0f,470f,847f,0f, 358f,850f,1f,472f,850f,1f,470f,851f,0f,358f,850f,1f,470f,851f,0f,360f,851f,0f, 350f,844f,0f,360f,844f,0f,360f,854f,0f,350f,844f,0f,360f,854f,0f,350f,854f,0f, 352f,852f,1f,352f,846f,1f,350f,844f,0f,352f,852f,1f,350f,844f,0f,350f,854f,0f, 358f,852f,1f,352f,852f,1f,350f,854f,0f,358f,852f,1f,350f,854f,0f,360f,854f,0f, 352f,846f,1f,358f,846f,1f,360f,844f,0f,352f,846f,1f,360f,844f,0f,350f,844f,0f, 358f,846f,1f,358f,852f,1f,360f,854f,0f,358f,846f,1f,360f,854f,0f,360f,844f,0f, 352f,852f,1f,358f,852f,1f,358f,846f,1f,352f,852f,1f,358f,846f,1f,352f,846f,1f, 128f,846f,1f,122f,846f,1f,122f,852f,1f,128f,846f,1f,122f,852f,1f,128f,852f,1f, 122f,852f,1f,122f,846f,1f,120f,844f,0f,122f,852f,1f,120f,844f,0f,120f,854f,0f, 128f,852f,1f,122f,852f,1f,120f,854f,0f,128f,852f,1f,120f,854f,0f,130f,854f,0f, 122f,846f,1f,128f,846f,1f,130f,844f,0f,122f,846f,1f,130f,844f,0f,120f,844f,0f, 128f,846f,1f,128f,852f,1f,130f,854f,0f,128f,846f,1f,130f,854f,0f,130f,844f,0f, 130f,854f,0f,120f,854f,0f,120f,844f,0f,130f,854f,0f,120f,844f,0f,130f,844f,0f, 122f,848f,1f,8f,848f,1f,10f,847f,0f,122f,848f,1f,10f,847f,0f,120f,847f,0f, 8f,850f,1f,122f,850f,1f,120f,851f,0f,8f,850f,1f,120f,851f,0f,10f,851f,0f, 8f,850f,1f,8f,848f,1f,122f,848f,1f,8f,850f,1f,122f,848f,1f,122f,850f,1f, 10f,847f,0f,120f,847f,0f,124.96f,829.63f,-0.50f,10f,847f,0f,124.96f,829.63f,-0.50f,19.51f,829.63f,-0.50f, 130f,844f,0f,130f,854f,0f,134.55f,836.34f,-0.50f,130f,844f,0f,134.55f,836.34f,-0.50f,134.55f,826.76f,-0.50f, 350f,844f,0f,350f,854f,0f,345.45f,836.34f,-0.50f,350f,844f,0f,345.45f,836.34f,-0.50f,345.45f,826.76f,-0.50f, 360f,847f,0f,470f,847f,0f,460.49f,829.63f,-0.50f,360f,847f,0f,460.49f,829.63f,-0.50f,355.04f,829.63f,-0.50f, 
470f,7.00f,0f,360f,7.00f,0f,355.04f,24.37f,-0.50f,470f,7.00f,0f,355.04f,24.37f,-0.50f,460.49f,24.37f,-0.50f, 350f,10.00f,0f,350f,0.00f,0f,345.45f,17.66f,-0.50f,350f,10.00f,0f,345.45f,17.66f,-0.50f,345.45f,27.24f,-0.50f, 130f,10.00f,0f,130f,0.00f,0f,134.55f,17.66f,-0.50f,130f,10.00f,0f,134.55f,17.66f,-0.50f,134.55f,27.24f,-0.50f, 473f,844f,0f,473f,10f,0f,463.36f,27.24f,-0.50f,473f,844f,0f,463.36f,27.24f,-0.50f,463.36f,826.76f,-0.50f, 7f,10f,0f,7f,844f,0f,16.64f,826.76f,-0.50f,7f,10f,0f,16.64f,826.76f,-0.50f,16.64f,27.24f,-0.50f, 120f,7.00f,0f,10.00f,7.00f,0f,19.51f,24.37f,-0.50f,120f,7.00f,0f,19.51f,24.37f,-0.50f,124.96f,24.37f,-0.50f, 120f,7.00f,0f,130f,10.00f,0f,134.55f,27.24f,-0.50f,120f,7.00f,0f,134.55f,27.24f,-0.50f,124.96f,24.37f,-0.50f, 10.00f,7.00f,0f,7f,10f,0f,16.64f,27.24f,-0.50f,10.00f,7.00f,0f,16.64f,27.24f,-0.50f,19.51f,24.37f,-0.50f, 350f,10.00f,0f,360f,7.00f,0f,355.04f,24.37f,-0.50f,350f,10.00f,0f,355.04f,24.37f,-0.50f,345.45f,27.24f,-0.50f, 473f,10f,0f,470f,7.00f,0f,460.49f,24.37f,-0.50f,473f,10f,0f,460.49f,24.37f,-0.50f,463.36f,27.24f,-0.50f, 473f,844f,0f,470f,847f,0f,460.49f,829.63f,-0.50f,473f,844f,0f,460.49f,829.63f,-0.50f,463.36f,826.76f,-0.50f, 360f,847f,0f,350f,844f,0f,345.45f,826.76f,-0.50f,360f,847f,0f,345.45f,826.76f,-0.50f,355.04f,829.63f,-0.50f, 130f,844f,0f,120f,847f,0f,124.96f,829.63f,-0.50f,130f,844f,0f,124.96f,829.63f,-0.50f,134.55f,826.76f,-0.50f, 7f,844f,0f,10f,847f,0f,19.51f,829.63f,-0.50f,7f,844f,0f,19.51f,829.63f,-0.50f,16.64f,826.76f,-0.50f, 19.51f,829.63f,-0.50f,124.96f,829.63f,-0.50f,136.47f,789.37f,-2f,19.51f,829.63f,-0.50f,136.47f,789.37f,-2f,41.56f,789.37f,-2f, 134.55f,826.76f,-0.50f,134.55f,836.34f,-0.50f,145.09f,795.41f,-2f,134.55f,826.76f,-0.50f,145.09f,795.41f,-2f,145.09f,786.78f,-2f, 345.45f,826.76f,-0.50f,345.45f,836.34f,-0.50f,334.91f,795.41f,-2f,345.45f,826.76f,-0.50f,334.91f,795.41f,-2f,334.91f,786.78f,-2f, 355.04f,829.63f,-0.50f,460.49f,829.63f,-0.50f,438.44f,789.37f,-2f,355.04f,829.63f,-0.50f,438.44f,789.37f,-2f,343.53f,789.37f,-2f, 460.49f,24.37f,-0.50f,355.04f,24.37f,-0.50f,343.53f,64.63f,-2f,460.49f,24.37f,-0.50f,343.53f,64.63f,-2f,438.44f,64.63f,-2f, 345.45f,27.24f,-0.50f,345.45f,17.66f,-0.50f,334.91f,58.59f,-2f,345.45f,27.24f,-0.50f,334.91f,58.59f,-2f,334.91f,67.22f,-2f, 134.55f,27.24f,-0.50f,134.55f,17.66f,-0.50f,145.09f,58.59f,-2f,134.55f,27.24f,-0.50f,145.09f,58.59f,-2f,145.09f,67.22f,-2f, 463.36f,826.76f,-0.50f,463.36f,27.24f,-0.50f,441.03f,67.22f,-2f,463.36f,826.76f,-0.50f,441.03f,67.22f,-2f,441.03f,786.78f,-2f, 16.64f,27.24f,-0.50f,16.64f,826.76f,-0.50f,38.97f,786.78f,-2f,16.64f,27.24f,-0.50f,38.97f,786.78f,-2f,38.97f,67.22f,-2f, 124.96f,24.37f,-0.50f,19.51f,24.37f,-0.50f,41.56f,64.63f,-2f,124.96f,24.37f,-0.50f,41.56f,64.63f,-2f,136.47f,64.63f,-2f, 124.96f,24.37f,-0.50f,134.55f,27.24f,-0.50f,145.09f,67.22f,-2f,124.96f,24.37f,-0.50f,145.09f,67.22f,-2f,136.47f,64.63f,-2f, 19.51f,24.37f,-0.50f,16.64f,27.24f,-0.50f,38.97f,67.22f,-2f,19.51f,24.37f,-0.50f,38.97f,67.22f,-2f,41.56f,64.63f,-2f, 345.45f,27.24f,-0.50f,355.04f,24.37f,-0.50f,343.53f,64.63f,-2f,345.45f,27.24f,-0.50f,343.53f,64.63f,-2f,334.91f,67.22f,-2f, 463.36f,27.24f,-0.50f,460.49f,24.37f,-0.50f,438.44f,64.63f,-2f,463.36f,27.24f,-0.50f,438.44f,64.63f,-2f,441.03f,67.22f,-2f, 463.36f,826.76f,-0.50f,460.49f,829.63f,-0.50f,438.44f,789.37f,-2f,463.36f,826.76f,-0.50f,438.44f,789.37f,-2f,441.03f,786.78f,-2f, 355.04f,829.63f,-0.50f,345.45f,826.76f,-0.50f,334.91f,786.78f,-2f,355.04f,829.63f,-0.50f,334.91f,786.78f,-2f,343.53f,789.37f,-2f, 
134.55f,826.76f,-0.50f,124.96f,829.63f,-0.50f,136.47f,789.37f,-2f,134.55f,826.76f,-0.50f,136.47f,789.37f,-2f,145.09f,786.78f,-2f, 16.64f,826.76f,-0.50f,19.51f,829.63f,-0.50f,41.56f,789.37f,-2f,16.64f,826.76f,-0.50f,41.56f,789.37f,-2f,38.97f,786.78f,-2f, }; private float[] backgroundData = new float[] { // # ,Scale, Speed, 300 , 1.05f, .001f, 150 , 1.07f, .002f, 075 , 1.10f, .003f, 040 , 1.12f, .006f, 20 , 1.15f, .012f, 10 , 1.25f, .025f, 05 , 1.50f, .050f, 3 , 2.00f, .100f, 2 , 3.00f, .200f, }; private float[] triangleCoords = new float[] { 0, -25, 0, -.75f, -1, 0, +.75f, -1, 0, 0, +2, 0, -.99f, -1, 0, .99f, -1, 0, }; private float[] triangleColors = new float[] { 1.0f, 1.0f, 1.0f, 0.05f, 1.0f, 1.0f, 1.0f, 0.5f, 1.0f, 1.0f, 1.0f, 0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 0.5f, 1.0f, 1.0f, 1.0f, 0.5f, }; private float[] drawArray2; private FloatBuffer drawBuffer2; private float[] colorArray2; private static FloatBuffer colorBuffer; private static FloatBuffer triangleBuffer; private static FloatBuffer quadBuffer; private static FloatBuffer drawBuffer; private float[] backgroundVerts; private FloatBuffer backgroundVertsWrapped; private float[] backgroundColors; private Buffer backgroundColorsWraped; private FloatBuffer backgroundColorsWrapped; private FloatBuffer arenaWallsWrapped; private FloatBuffer arenaColorsWrapped; private FloatBuffer arena2VertsWrapped; private FloatBuffer arena2ColorsWrapped; private long wallHitStartTime; private int wallHitDrawTime; private FloatBuffer pixelVertsWrapped; private float[] wallHit; private FloatBuffer pixelColorsWrapped; //private float[] pitVerts; private Resources lResources; private FloatBuffer pitVertsWrapped; private FloatBuffer pitColorsWrapped; private boolean arena2; private long lastStartTime; private long startTime; private int state=1; private long introEndTime; protected long introTotalTime =8000; protected long introStartTime; private boolean initDone= false; private static int stateIntro = 0; private static int stateGame = 1; public GlRenderer(spacehockey nspacehockey) { lResources = nspacehockey.getResources(); nspacehockey.SetHandlerToGLRenderer(new Handler() { @Override public void handleMessage(Message m) { if (m.what ==0){ wallHit = m.getData().getFloatArray("wall hit"); wallHitStartTime =System.currentTimeMillis(); wallHitDrawTime = 1000; }else if (m.what ==1){ //state = stateIntro; introEndTime= System.currentTimeMillis()+introTotalTime ; introStartTime = System.currentTimeMillis(); } }}); } public void onSurfaceCreated(GL10 gl, EGLConfig config) { gl.glShadeModel(GL10.GL_SMOOTH); gl.glClearColor(.01f, .01f, .01f, .1f); gl.glClearDepthf(1.0f); gl.glEnable(GL10.GL_DEPTH_TEST); gl.glDepthFunc(GL10.GL_LEQUAL); gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST); } private float SumOfStrideI(float[] data, int offset, int stride) { int sum= 0; for (int i=offset;i<data.length-1;i=i+stride){ sum = (int) (data[i]+sum); } return sum; } public void onDrawFrame(GL10 gl) { if (state== stateIntro){DrawIntro(gl);} if (state== stateGame){DrawGame(gl);} } private void DrawIntro(GL10 gl) { startTime = System.currentTimeMillis(); if (startTime< introEndTime){ float ptd = (float)(startTime- introStartTime)/(float)introTotalTime; float ptl = 1-ptd; gl.glClear(GL10.GL_COLOR_BUFFER_BIT);//dont move gl.glMatrixMode(GL10.GL_MODELVIEW); int setVertOff = 0; gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glEnableClientState(GL10.GL_COLOR_ARRAY); gl.glColorPointer(4, GL10.GL_FLOAT, 0, backgroundColorsWrapped); for (int i = 0; i < 
backgroundData.length / 3; i = i + 1) { int setoff = i * 3; int setVertLen = (int) backgroundData[setoff]; yoffs[i] = (backgroundData[setoff + 2]*(90+(ptl*250))) + yoffs[i]; if (yoffs[i] > Height) {yoffs[i] = 0;} gl.glPushMatrix(); //gl.glTranslatef(0, -(Height/2), 0); //gl.glScalef(1f, 1f+(ptl*2), 1f); //gl.glTranslatef(0, +(Height/2), 0); gl.glTranslatef(0, yoffs[i], i+60); gl.glVertexPointer(3, GL10.GL_FLOAT, 0, backgroundVertsWrapped); gl.glDrawArrays(GL10.GL_TRIANGLES, (setVertOff * 2 * 3) - 0, (setVertLen * 2 * 3) - 1); gl.glTranslatef(0, -Height, 0); gl.glDrawArrays(GL10.GL_TRIANGLES, (setVertOff * 2 * 3) - 0, (setVertLen * 2 * 3) - 1); setVertOff = (int) (setVertOff + setVertLen); gl.glPopMatrix(); } gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); gl.glDisableClientState(GL10.GL_COLOR_ARRAY); }else{state = stateGame;} } private void DrawGame(GL10 gl) { lastStartTime = startTime; startTime = System.currentTimeMillis(); long moveTime = startTime-lastStartTime; gl.glClear(GL10.GL_COLOR_BUFFER_BIT);//dont move gl.glMatrixMode(GL10.GL_MODELVIEW); int setVertOff = 0; gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glEnableClientState(GL10.GL_COLOR_ARRAY); gl.glColorPointer(4, GL10.GL_FLOAT, 0, backgroundColorsWrapped); for (int i = 0; i < backgroundData.length / 3; i = i + 1) { int setoff = i * 3; int setVertLen = (int) backgroundData[setoff]; yoffs[i] = (backgroundData[setoff + 2]*moveTime) + yoffs[i]; if (yoffs[i] > Height) {yoffs[i] = 0;} gl.glPushMatrix(); gl.glTranslatef(0, yoffs[i], i+60); gl.glVertexPointer(3, GL10.GL_FLOAT, 0, backgroundVertsWrapped); gl.glDrawArrays(GL10.GL_TRIANGLES, (setVertOff * 6) - 0, (setVertLen *6) - 1); gl.glTranslatef(0, -Height, 0); gl.glDrawArrays(GL10.GL_TRIANGLES, (setVertOff * 6) - 0, (setVertLen *6) - 1); setVertOff = (int) (setVertOff + setVertLen); gl.glPopMatrix(); } //arena frame gl.glPushMatrix(); gl.glVertexPointer(3, GL10.GL_FLOAT, 0, arenaWallsWrapped); gl.glColorPointer(4, GL10.GL_FLOAT, 0, arenaColorsWrapped); gl.glColor4f(.1f, .5f, 1f, 1f); gl.glTranslatef(0, 0, 50); gl.glDrawArrays(GL10.GL_TRIANGLES, 0, (int)(arenaWalls.length / 3)); gl.glPopMatrix(); //arena2 frame if (arena2 == true){ gl.glLoadIdentity(); gl.glVertexPointer(3, GL10.GL_FLOAT, 0, pitVertsWrapped); gl.glColorPointer(4, GL10.GL_FLOAT, 0, pitColorsWrapped); gl.glTranslatef(0, -Height, 40); gl.glDrawArrays(GL10.GL_TRIANGLES, 0, (int)(pitVertsWrapped.capacity() / 3)); } if (wallHitStartTime != 0) { float timeRemaining = (wallHitStartTime + wallHitDrawTime)-System.currentTimeMillis(); if (timeRemaining>0) { gl.glPushMatrix(); float percentDone = 1-(timeRemaining/wallHitDrawTime); gl.glLoadIdentity(); gl.glVertexPointer(3, GL10.GL_FLOAT, 0, pixelVertsWrapped); gl.glColorPointer(4, GL10.GL_FLOAT, 0, pixelColorsWrapped); gl.glTranslatef(wallHit[0], wallHit[1], 0); gl.glScalef(8, Height*percentDone, 0); gl.glDrawArrays(GL10.GL_TRIANGLES, 0, 12); gl.glPopMatrix(); } else { wallHitStartTime = 0; } } gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); gl.glDisableClientState(GL10.GL_COLOR_ARRAY); } public void init(GL10 gl) { if (arena2 == true) { AssetManager assetManager = lResources.getAssets(); try { // byte[] ba = {111,111}; DataInputStream Dis = new DataInputStream(assetManager .open("arena2.ogl")); pitVertsWrapped = LoadFloatArray.FromDataInputStream(Dis); pitColorsWrapped = MakeFakeLighting(pitVertsWrapped.array(), .25f, .50f, 1f, 200, .5f); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } } if ((Height != 854) || (Width != 480)) { arenaWalls = 
ScaleFloats(arenaWalls, Width / 480f, Height / 854f); } arenaWallsWrapped = FloatBuffer.wrap(arenaWalls); arenaColorsWrapped = MakeFakeLighting(arenaWalls, .03f, .16f, .33f, .33f, 3); pixelVertsWrapped = FloatBuffer.wrap(pixelVerts); pixelColorsWrapped = MakeFakeLighting(pixelVerts, .03f, .16f, .33f, .10f, 20); initDone=true; } public void onSurfaceChanged(GL10 gl, int nwidth, int nheight) { Width= nwidth; Height = nheight; // avoid division by zero if (Height == 0) Height = 1; // draw on the entire screen gl.glViewport(0, 0, Width, Height); // setup projection matrix gl.glMatrixMode(GL10.GL_PROJECTION); gl.glLoadIdentity(); gl.glOrthof(0, Width, Height, 0, 100, -100); // gl.glOrthof(-nwidth*2, nwidth*2, nheight*2,-nheight*2, 100, -100); // GLU.gluPerspective(gl, 180.0f, (float)nwidth / (float)nheight, // 1000.0f, -1000.0f); gl.glEnable(GL10.GL_BLEND); gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA); System.gc(); if (initDone == false){ SetupStars(); init(gl); } } public void SetupStars(){ backgroundVerts = new float[(int) SumOfStrideI(backgroundData,0,3)*triangleCoords.length]; backgroundColors = new float[(int) SumOfStrideI(backgroundData,0,3)*triangleColors.length]; int iii=0; int vc=0; float ascale=1; for (int i=0;i<backgroundColors.length-1;i=i+1){ if (iii==0){ascale = (float) Math.random();} if (vc==3){ backgroundColors[i]= (float) (triangleColors[iii]*(ascale)); }else if(vc==2){ backgroundColors[i]= (float) (triangleColors[iii]-(Math.random()*.2)); }else{ backgroundColors[i]= (float) (triangleColors[iii]-(Math.random()*.3)); } iii=iii+1;if (iii> triangleColors.length-1){iii=0;} vc=vc+1; if (vc>3){vc=0;} } int ii=0; int i =0; int set =0; while(ii<backgroundVerts.length-1){ float scale = (float) backgroundData[(set*3)+1]; int length= (int) backgroundData[(set*3)]; for (i=0;i<length;i=i+1){ if (set ==0){ AddVertsToArray(ScaleFloats(triangleCoords, scale,scale*.25f), backgroundVerts, (float)(Math.random()*Width),(float) (Math.random()*Height), ii); }else{ AddVertsToArray(ScaleFloats(triangleCoords, scale), backgroundVerts, (float)(Math.random()*Width),(float) (Math.random()*Height), ii);} ii=ii+triangleCoords.length; } set=set+1; } backgroundVertsWrapped = FloatBuffer.wrap(backgroundVerts); backgroundColorsWrapped = FloatBuffer.wrap(backgroundColors); } public void AddVertsToArray(float[] sva,float[]dva,float ox,float oy,int start){ //x for (int i=0;i<sva.length;i=i+3){ if((start+i)<dva.length){dva[start+i]= sva[i]+ox;} } //y for (int i=1;i<sva.length;i=i+3){ if((start+i)<dva.length){dva[start+i]= sva[i]+oy;} } //z for (int i=2;i<sva.length;i=i+3){ if((start+i)<dva.length){dva[start+i]= sva[i];} } } public FloatBuffer MakeFakeLighting(float[] sa,float r, float g,float b,float a,float multby){ float[] da = new float[((sa.length/3)*4)]; int vertex=0; for (int i=0;i<sa.length;i=i+3){ if (sa[i+2]>=1){ da[(vertex*4)+0]= r*multby*sa[i+2]; da[(vertex*4)+1]= g*multby*sa[i+2]; da[(vertex*4)+2]= b*multby*sa[i+2]; da[(vertex*4)+3]= a*multby*sa[i+2]; }else if (sa[i+2]<=-1){ float divisor = (multby*(-sa[i+2])); da[(vertex*4)+0]= r / divisor; da[(vertex*4)+1]= g / divisor; da[(vertex*4)+2]= b / divisor; da[(vertex*4)+3]= a / divisor; }else{ da[(vertex*4)+0]= r; da[(vertex*4)+1]= g; da[(vertex*4)+2]= b; da[(vertex*4)+3]= a; } vertex = vertex+1; } return FloatBuffer.wrap(da); } public float[] ScaleFloats(float[] va,float s){ float[] reta= new float[va.length]; for (int i=0;i<va.length;i=i+1){ reta[i]=va[i]*s; } return reta; } public float[] ScaleFloats(float[] va,float sx,float sy){ 
float[] reta= new float[va.length]; int cnt = 0; for (int i=0;i<va.length;i=i+1){ if (cnt==0){reta[i]=va[i]*sx;} else if (cnt==1){reta[i]=va[i]*sy;} else if (cnt==2){reta[i]=va[i];} cnt = cnt +1;if (cnt>2){cnt=0;} } return reta; } }


  • WD MBWE II (White Strip Light) 2TB - unable to access data

    - by user210477
    I have a WD MBWE II (White Strip Light) 2TB - (WD20000H2NC-00). It was working fine until a few days ago. I guess there was a power failure and after that I am unable to access the 'Public' or the 'Download' folder anymore. I have been searching for answers everywhere but came up empty handed. Web GUI still works, SSH works. I hooked up both the drives on my PC and UFS Explorer sees the drive. But so far I am unable to retrieve any of my data. I do not remember what RAID setting I used when I first got the drive. I can see from the GUI that it is set as "Stripe". The drive contains 10 years of family pictures which I really do not want to lose. Sadly and stupidly, I didn't even keep a backup of this drive. Can somebody please help or point me in the right direction? Thank you in advance for your help. Disk Utility on Ubuntu reports 1405 bad sectors on one drive. How can I retrieve my data? Please help. Logs below: ~ # mdadm --detail /dev/md[012345678] /dev/md0: Version : 0.90 Creation Time : Wed Jul 15 08:36:17 2009 Raid Level : raid1 Array Size : 1959872 (1914.26 MiB 2006.91 MB) Used Dev Size : 1959872 (1914.26 MiB 2006.91 MB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Fri Nov 1 13:53:29 2013 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 UUID : 04f7a661:98983b3b:26b29e4f:9b646adb Events : 0.266 Number Major Minor RaidDevice State 0 8 1 0 active sync /dev/sda1 1 8 17 1 active sync /dev/sdb1 /dev/md1: Version : 0.90 Creation Time : Wed Jul 15 08:36:18 2009 Raid Level : raid1 Array Size : 256896 (250.92 MiB 263.06 MB) Used Dev Size : 256896 (250.92 MiB 263.06 MB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 1 Persistence : Superblock is persistent Update Time : Wed Oct 30 22:08:21 2013 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 UUID : aaa7b859:c475312d:efc5a766:6526b867 Events : 0.10 Number Major Minor RaidDevice State 0 8 2 0 active sync /dev/sda2 1 8 18 1 active sync /dev/sdb2 /dev/md2: Version : 0.90 Creation Time : Sat Sep 25 10:01:26 2010 Raid Level : raid0 Array Size : 1947045760 (1856.85 GiB 1993.77 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 2 Persistence : Superblock is persistent Update Time : Fri Nov 1 13:30:53 2013 State : active Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Chunk Size : 64K UUID : 01dae60a:6831077b:77f74530:8680c183 Events : 0.97 Number Major Minor RaidDevice State 0 8 4 0 active sync /dev/sda4 1 8 20 1 active sync /dev/sdb4 /dev/md3: Version : 0.90 Creation Time : Wed Jul 15 08:36:18 2009 Raid Level : raid1 Array Size : 987904 (964.91 MiB 1011.61 MB) Used Dev Size : 987904 (964.91 MiB 1011.61 MB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 3 Persistence : Superblock is persistent Update Time : Fri Nov 1 13:26:33 2013 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 UUID : 3f4099f2:72e6171b:5ba962fd:48464a62 Events : 0.54 Number Major Minor RaidDevice State 0 8 3 0 active sync /dev/sda3 1 8 19 1 active sync /dev/sdb3 mdadm: md device /dev/md4 does not appear to be active. mdadm: md device /dev/md5 does not appear to be active. mdadm: md device /dev/md6 does not appear to be active. mdadm: md device /dev/md7 does not appear to be active. mdadm: md device /dev/md8 does not appear to be active. 
~ # cat /etc/mtab securityfs /sys/kernel/security securityfs rw 0 0 /dev/md2 /DataVolume xfs rw,usrquota 0 0 /dev/md4 /ExtendVolume xfs rw,usrquota 0 0 ~ # df -k Filesystem 1k-blocks Used Available Use% Mounted on /dev/md0 1929044 145092 1685960 8% / /dev/md3 972344 123452 799500 13% /var /dev/ram0 63412 20 63392 0% /mnt/ram ~ # mdadm -D /dev/md2 /dev/md2: Version : 0.90 Creation Time : Sat Sep 25 10:01:26 2010 Raid Level : raid0 Array Size : 1947045760 (1856.85 GiB 1993.77 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 2 Persistence : Superblock is persistent Update Time : Fri Nov 1 13:30:53 2013 State : active Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Chunk Size : 64K UUID : 01dae60a:6831077b:77f74530:8680c183 Events : 0.97 Number Major Minor RaidDevice State 0 8 4 0 active sync /dev/sda4 1 8 20 1 active sync /dev/sdb4 ~ # mdadm -D /dev/md4 mdadm: md device /dev/md4 does not appear to be active. ~ # mount /dev/root on / type ext3 (rw,noatime,data=ordered) proc on /proc type proc (rw) sys on /sys type sysfs (rw) /dev/pts on /dev/pts type devpts (rw) securityfs on /sys/kernel/security type securityfs (rw) /dev/md3 on /var type ext3 (rw,noatime,data=ordered) /dev/ram0 on /mnt/ram type tmpfs (rw) ~ # cat /var/log/messages Oct 29 18:04:50 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down. Oct 29 18:04:59 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 29 18:04:59 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 29 18:17:45 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down. Oct 29 18:17:53 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 29 18:17:53 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 00:50:11 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down. Oct 30 00:50:19 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 00:50:19 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 16:29:47 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down. Oct 30 16:30:00 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 16:30:00 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 18:27:22 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down. Oct 30 18:27:30 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 18:27:30 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 19:06:03 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down. Oct 30 19:06:10 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 19:06:10 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 19:14:58 shmotashNAS daemon.warn wixEvent[3462]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. 
Oct 30 19:20:05 shmotashNAS daemon.alert wixEvent[3462]: Thermal Alarm - System temperature exceeded threshold.(66 degrees) Oct 30 19:58:29 shmotashNAS daemon.alert wixEvent[3462]: HDD SMART - HDD 1 SMART Health Status: Failed. Oct 30 22:05:39 shmotashNAS daemon.info init: Starting pid 13043, console /dev/null: '/usr/bin/killall' Oct 30 22:05:39 shmotashNAS syslog.info System log daemon exiting. Oct 30 22:08:09 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1 Oct 30 22:08:09 shmotashNAS daemon.warn wixEvent[3557]: Network Link - NIC 1 link is down. Oct 30 22:08:19 shmotashNAS daemon.info wixEvent[3557]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 22:08:25 shmotashNAS daemon.warn wixEvent[3557]: Network Link - NIC 1 link is down. Oct 30 22:08:37 shmotashNAS daemon.info wixEvent[3557]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 22:08:44 shmotashNAS daemon.warn wixEvent[3557]: Network Link - NIC 1 link is down. Oct 30 22:08:46 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:10:30 - 22:08:46 [Version 01.09.00.96] ++++++++++++++ Oct 30 22:08:46 shmotashNAS syslog.info miocrawler: mc_db_init ... Oct 30 22:08:46 shmotashNAS syslog.info miocrawler: ****** database does not exist. ret = -1, creating path Oct 30 22:08:49 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done. Oct 30 22:08:50 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool Oct 30 22:08:51 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done. Oct 30 22:08:51 shmotashNAS syslog.info miocrawler: === inotify init done. Oct 30 22:08:51 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ... Oct 30 22:08:52 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done. Oct 30 22:08:52 shmotashNAS syslog.info miocrawler: === Walking directory done. Oct 30 22:08:57 shmotashNAS daemon.info wixEvent[3557]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 22:08:57 shmotashNAS daemon.info wixEvent[3557]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 22:08:57 shmotashNAS daemon.info wixEvent[3557]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 22:08:57 shmotashNAS daemon.info wixEvent[3557]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 22:09:10 shmotashNAS daemon.info init: Starting pid 4605, console /dev/null: '/bin/touch' Oct 30 22:09:10 shmotashNAS daemon.info init: Starting pid 4607, console /dev/ttyS0: '/sbin/getty' Oct 30 22:09:10 shmotashNAS daemon.info wixEvent[3557]: System Startup - System startup. Oct 30 22:09:16 shmotashNAS daemon.warn wixEvent[3557]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. Oct 30 22:14:14 shmotashNAS daemon.warn wixEvent[3557]: Network Link - NIC 1 link is down. Oct 30 22:14:21 shmotashNAS daemon.info wixEvent[3557]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 22:14:21 shmotashNAS daemon.info wixEvent[3557]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 22:29:36 shmotashNAS daemon.warn wixEvent[3557]: System Reboot - System will reboot. Oct 30 22:29:40 shmotashNAS daemon.info init: Starting pid 5974, console /dev/null: '/usr/bin/killall' Oct 30 22:29:40 shmotashNAS syslog.info System log daemon exiting. 
Oct 30 22:47:56 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1 Oct 30 22:47:56 shmotashNAS daemon.warn wixEvent[3461]: Network Link - NIC 1 link is down. Oct 30 22:48:02 shmotashNAS daemon.info wixEvent[3461]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 22:48:02 shmotashNAS daemon.info wixEvent[3461]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 22:48:09 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:10:30 - 22:48:09 [Version 01.09.00.96] ++++++++++++++ Oct 30 22:48:09 shmotashNAS syslog.info miocrawler: mc_db_init ... Oct 30 22:48:09 shmotashNAS syslog.info miocrawler: ++++++++ database exists: ret = 0 Oct 30 22:48:10 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done. Oct 30 22:48:10 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done. Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: === inotify init done. Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ... Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done. Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: === Walking directory done. Oct 30 22:48:27 shmotashNAS daemon.info init: Starting pid 4079, console /dev/null: '/bin/touch' Oct 30 22:48:27 shmotashNAS daemon.info init: Starting pid 4080, console /dev/ttyS0: '/sbin/getty' Oct 30 22:48:28 shmotashNAS daemon.info wixEvent[3461]: System Startup - System startup. Oct 30 22:49:01 shmotashNAS daemon.warn wixEvent[3461]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. Oct 30 23:51:11 shmotashNAS daemon.warn wixEvent[3461]: System Reboot - System will reboot. Oct 30 23:51:16 shmotashNAS daemon.info init: Starting pid 6498, console /dev/null: '/usr/bin/killall' Oct 30 23:51:16 shmotashNAS syslog.info System log daemon exiting. Oct 30 23:54:19 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1 Oct 30 23:55:37 shmotashNAS daemon.info wixEvent[3476]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 23:55:37 shmotashNAS daemon.info wixEvent[3476]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 23:55:44 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:10:30 - 23:55:44 [Version 01.09.00.96] ++++++++++++++ Oct 30 23:55:44 shmotashNAS syslog.info miocrawler: mc_db_init ... Oct 30 23:55:44 shmotashNAS syslog.info miocrawler: ++++++++ database exists: ret = 0 Oct 30 23:55:45 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done. Oct 30 23:55:45 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done. Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: === inotify init done. Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ... Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done. Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: === Walking directory done. 
Oct 30 23:55:58 shmotashNAS daemon.info init: Starting pid 4115, console /dev/null: '/bin/touch' Oct 30 23:55:58 shmotashNAS daemon.info init: Starting pid 4116, console /dev/ttyS0: '/sbin/getty' Oct 30 23:55:58 shmotashNAS daemon.info wixEvent[3476]: System Startup - System startup. Oct 30 23:56:33 shmotashNAS daemon.warn wixEvent[3476]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. Oct 31 00:29:14 shmotashNAS auth.info sshd[5409]: Server listening on 0.0.0.0 port 22. Oct 31 00:31:25 shmotashNAS auth.info sshd[5486]: Accepted password for root from 192.168.1.100 port 50785 ssh2 Oct 31 00:33:44 shmotashNAS auth.info sshd[5565]: Accepted password for root from 192.168.1.100 port 50817 ssh2 Oct 31 00:36:39 shmotashNAS daemon.info init: Starting pid 5680, console /dev/null: '/usr/bin/killall' Oct 31 00:36:39 shmotashNAS syslog.info System log daemon exiting. Oct 31 00:40:44 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1 Oct 31 00:40:51 shmotashNAS daemon.info wixEvent[3464]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 31 00:40:51 shmotashNAS daemon.info wixEvent[3464]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 31 00:41:00 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:10:31 - 00:41:00 [Version 01.09.00.96] ++++++++++++++ Oct 31 00:41:00 shmotashNAS syslog.info miocrawler: mc_db_init ... Oct 31 00:41:00 shmotashNAS syslog.info miocrawler: ++++++++ database exists: ret = 0 Oct 31 00:41:00 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done. Oct 31 00:41:01 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done. Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: === inotify init done. Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ... Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done. Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: === Walking directory done. Oct 31 00:41:14 shmotashNAS daemon.info init: Starting pid 4101, console /dev/null: '/bin/touch' Oct 31 00:41:14 shmotashNAS daemon.info init: Starting pid 4102, console /dev/ttyS0: '/sbin/getty' Oct 31 00:41:15 shmotashNAS daemon.info wixEvent[3464]: System Startup - System startup. Oct 31 00:41:47 shmotashNAS daemon.warn wixEvent[3464]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. Oct 31 01:13:19 shmotashNAS daemon.info init: Starting pid 5385, console /dev/null: '/usr/bin/killall' Oct 31 01:13:19 shmotashNAS syslog.info System log daemon exiting. Nov 1 13:26:25 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1 Nov 1 13:26:32 shmotashNAS daemon.info wixEvent[3471]: Network Link - NIC 1 link is up 100 Mbps full duplex. Nov 1 13:26:32 shmotashNAS daemon.info wixEvent[3471]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Nov 1 13:26:38 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:11:01 - 13:26:38 [Version 01.09.00.96] ++++++++++++++ Nov 1 13:26:38 shmotashNAS syslog.info miocrawler: mc_db_init ... 
Nov 1 13:26:38 shmotashNAS syslog.info miocrawler: ++++++++ database exists: ret = 0 Nov 1 13:26:39 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done. Nov 1 13:26:39 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done. Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: === inotify init done. Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ... Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done. Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: === Walking directory done. Nov 1 13:26:52 shmotashNAS daemon.info init: Starting pid 4078, console /dev/null: '/bin/touch' Nov 1 13:26:52 shmotashNAS daemon.info init: Starting pid 4079, console /dev/ttyS0: '/sbin/getty' Nov 1 13:26:52 shmotashNAS daemon.info wixEvent[3471]: System Startup - System startup. Nov 1 13:27:28 shmotashNAS daemon.warn wixEvent[3471]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. Nov 1 13:44:48 shmotashNAS auth.info sshd[5375]: Accepted password for root from 192.168.1.103 port 50217 ssh2 Nov 1 13:51:08 shmotashNAS auth.info sshd[5894]: Accepted password for root from 192.168.1.103 port 50380 ssh2
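    One cautious recovery route (a minimal sketch, not from the original post; the device names /dev/sda4 and /dev/sdb4, the mount point, and ddrescue/smartmontools being installed are all assumptions that will differ on your rescue PC) is to image both RAID members on another Linux machine and assemble the stripe read-only from the images, since /dev/md2 assembles cleanly on the NAS and the arrays use ordinary mdadm 0.90 superblocks:
    # Check the member that Disk Utility flagged with bad sectors before anything else
    sudo smartctl -a /dev/sda
    # Image each RAID member so all further work happens on copies, not the failing disks
    sudo ddrescue -d -r3 /dev/sda4 sda4.img sda4.map
    sudo ddrescue -d -r3 /dev/sdb4 sdb4.img sdb4.map
    # Expose the images as loop devices and assemble the striped array read-only
    LOOP_A=$(sudo losetup -f --show sda4.img)
    LOOP_B=$(sudo losetup -f --show sdb4.img)
    sudo mdadm --assemble --readonly /dev/md2 "$LOOP_A" "$LOOP_B"
    # Mount the XFS data volume read-only (norecovery skips log replay on a dirty volume)
    sudo mkdir -p /mnt/recovery
    sudo mount -t xfs -o ro,norecovery /dev/md2 /mnt/recovery
    From there the Public and Download shares can be copied off with plain cp or rsync before attempting any repair on the original disks.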


  • Using a mounted NTFS share with nginx

    - by Hoff
    I have set up a local testing VM with Ubuntu Server 12.04 LTS and the LEMP stack. It's kind of an unconventional setup because instead of having all my PHP scripts on the local machine, I've mounted an NTFS share as the document root because I do my development on Windows. I had everything working perfectly up until this morning, now I keep getting a dreaded 'File not found.' error. I am almost certain this must be somehow permission related, because if I copy my site over to /var/www, nginx and php-fpm have no problems serving my PHP scripts. What I can't figure out is why all of a sudden (after a reboot of the server), no PHP files will be served but instead just the 'File not found.' error. Static files work fine, so I think it's PHP that is causing the headache. Both nginx and php-fpm are configured to run as the user www-data: root@ubuntu-server:~# ps aux | grep 'nginx\|php-fpm' root 1095 0.0 0.0 5816 792 ? Ss 11:11 0:00 nginx: master process /opt/nginx/sbin/nginx -c /etc/nginx/nginx.conf www-data 1096 0.0 0.1 6016 1172 ? S 11:11 0:00 nginx: worker process www-data 1098 0.0 0.1 6016 1172 ? S 11:11 0:00 nginx: worker process root 1130 0.0 0.4 175560 4212 ? Ss 11:11 0:00 php-fpm: master process (/etc/php5/php-fpm.conf) www-data 1131 0.0 0.3 175560 3216 ? S 11:11 0:00 php-fpm: pool www www-data 1132 0.0 0.3 175560 3216 ? S 11:11 0:00 php-fpm: pool www www-data 1133 0.0 0.3 175560 3216 ? S 11:11 0:00 php-fpm: pool www root 1686 0.0 0.0 4368 816 pts/1 S+ 11:11 0:00 grep --color=auto nginx\|php-fpm I have mounted the NTFS share at /mnt/webfiles by editing /etc/fstab and adding the following line: //192.168.0.199/c$/Websites/ /mnt/webfiles cifs username=Jordan,password=mypasswordhere,gid=33,uid=33 0 0 Where gid 33 is the www-data group and uid 33 is the user www-data. If I list the contents of one of my sites you can in fact see that they belong to the user www-data: root@ubuntu-server:~# ls -l /mnt/webfiles/nTv5-2.0 total 8 drwxr-xr-x 0 www-data www-data 0 Jun 6 19:12 app drwxr-xr-x 0 www-data www-data 0 Aug 22 19:00 assets -rwxr-xr-x 0 www-data www-data 1150 Jan 4 2012 favicon.ico -rwxr-xr-x 0 www-data www-data 1412 Dec 28 2011 index.php drwxr-xr-x 0 www-data www-data 0 Jun 3 16:44 lib drwxr-xr-x 0 www-data www-data 0 Jan 3 2012 plugins drwxr-xr-x 0 www-data www-data 0 Jun 3 16:45 vendors If I switch to the www-data user, I have no problem creating a new file on the share: root@ubuntu-server:~# su www-data $ > /mnt/webfiles/test.txt $ ls -l /mnt/webfiles | grep test\.txt -rwxr-xr-x 0 www-data www-data 0 Sep 8 11:19 test.txt There should be no problem reading or writing to the share with php-fpm running as the user www-data. When I examine the error log of nginx, it's filled with a bunch of lines that look like the following: 2012/09/08 11:22:36 [error] 1096#0: *1 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 192.168.0.199, server: , request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "192.168.0.123" 2012/09/08 11:22:39 [error] 1096#0: *1 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 192.168.0.199, server: , request: "GET /apc.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "192.168.0.123" It's bizarre that this was working previously and now all of sudden PHP is complaining that it can't "find" the scripts on the share. Does anybody know why this is happening? 
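    A quick sanity check worth running first (a hedged sketch; the document root path is taken from the site config further below and /mnt/webfiles is the poster's mount point) is to confirm that the script path nginx hands to php-fpm is actually visible to the www-data user after the reboot:
    # Is the CIFS share actually mounted right now?
    mount | grep /mnt/webfiles
    # Can the php-fpm user traverse and read the script nginx will request?
    sudo -u www-data stat /mnt/webfiles/nTv5-2.0/app/webroot/index.php
    # Show ownership and permissions for every component of the path
    namei -l /mnt/webfiles/nTv5-2.0/app/webroot/index.php
    If the stat fails as www-data but works as root, the problem lies with the mount options or mount timing rather than with nginx or php-fpm.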
EDIT I tried editing php-fpm.conf and changing chdir to the following: chdir = /mnt/webfiles When I try and restart the php-fpm service, I get the error: Starting php-fpm [08-Sep-2012 14:20:55] ERROR: [pool www] the chdir path '/mnt/webfiles' does not exist or is not a directory This is a total load of bullshit because this directory DOES exist and is mounted! Any ls commands to list that directory work perfectly. Why the hell can't PHP-FPM see this directory?! Here are my configuration files for reference: nginx.conf user www-data; worker_processes 2; error_log /var/log/nginx/nginx.log info; pid /var/run/nginx.pid; events { worker_connections 1024; multi_accept on; } http { include fastcgi.conf; include mime.types; default_type application/octet-stream; set_real_ip_from 127.0.0.1; real_ip_header X-Forwarded-For; ## Proxy proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; client_max_body_size 32m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffers 32 4k; ## Compression gzip on; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; ### TCP options tcp_nodelay on; tcp_nopush on; keepalive_timeout 65; sendfile on; include /etc/nginx/sites-enabled/*; } my site config server { listen 80; access_log /var/log/nginx/$host.access.log; error_log /var/log/nginx/error.log; root /mnt/webfiles/nTv5-2.0/app/webroot; index index.php; ## Block bad bots if ($http_user_agent ~* (HTTrack|HTMLParser|libcurl|discobot|Exabot|Casper|kmccrew|plaNETWORK|RPT-HTTPClient)) { return 444; } ## Block certain Referers (case insensitive) if ($http_referer ~* (sex|vigra|viagra) ) { return 444; } ## Deny dot files: location ~ /\. { deny all; } ## Favicon Not Found location = /favicon.ico { access_log off; log_not_found off; } ## Robots.txt Not Found location = /robots.txt { access_log off; log_not_found off; } if (-f $document_root/maintenance.html) { rewrite ^(.*)$ /maintenance.html last; } location ~* \.(?:ico|css|js|gif|jpe?g|png)$ { # Some basic cache-control for static files to be sent to the browser expires max; add_header Pragma public; add_header Cache-Control "max-age=2678400, public, must-revalidate"; } location / { try_files $uri $uri/ index.php; if (-f $request_filename) { break; } rewrite ^(.+)$ /index.php?url=$1 last; } location ~ \.php$ { include /etc/nginx/fastcgi.conf; fastcgi_pass unix:/var/run/php5-fpm.sock; } } php-fpm.conf ;;;;;;;;;;;;;;;;;;;;; ; FPM Configuration ; ;;;;;;;;;;;;;;;;;;;;; ; All relative paths in this configuration file are relative to PHP's install ; prefix (/opt/php5). This prefix can be dynamicaly changed by using the ; '-p' argument from the command line. ; Include one or more files. If glob(3) exists, it is used to include a bunch of ; files from a glob(3) pattern. This directive can be used everywhere in the ; file. ; Relative path can also be used. 
They will be prefixed by: ; - the global prefix if it's been set (-p arguement) ; - /opt/php5 otherwise ;include=etc/fpm.d/*.conf ;;;;;;;;;;;;;;;;;; ; Global Options ; ;;;;;;;;;;;;;;;;;; [global] ; Pid file ; Note: the default prefix is /opt/php5/var ; Default Value: none pid = /var/run/php-fpm.pid ; Error log file ; Note: the default prefix is /opt/php5/var ; Default Value: log/php-fpm.log error_log = /var/log/php5-fpm/php-fpm.log ; Log level ; Possible Values: alert, error, warning, notice, debug ; Default Value: notice ;log_level = notice ; If this number of child processes exit with SIGSEGV or SIGBUS within the time ; interval set by emergency_restart_interval then FPM will restart. A value ; of '0' means 'Off'. ; Default Value: 0 ;emergency_restart_threshold = 0 ; Interval of time used by emergency_restart_interval to determine when ; a graceful restart will be initiated. This can be useful to work around ; accidental corruptions in an accelerator's shared memory. ; Available Units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;emergency_restart_interval = 0 ; Time limit for child processes to wait for a reaction on signals from master. ; Available units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;process_control_timeout = 0 ; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging. ; Default Value: yes ;daemonize = yes ;;;;;;;;;;;;;;;;;;;; ; Pool Definitions ; ;;;;;;;;;;;;;;;;;;;; ; Multiple pools of child processes may be started with different listening ; ports and different management options. The name of the pool will be ; used in logs and stats. There is no limitation on the number of pools which ; FPM can handle. Your system will tell you anyway :) ; Start a new pool named 'www'. ; the variable $pool can we used in any directive and will be replaced by the ; pool name ('www' here) [www] ; Per pool prefix ; It only applies on the following directives: ; - 'slowlog' ; - 'listen' (unixsocket) ; - 'chroot' ; - 'chdir' ; - 'php_values' ; - 'php_admin_values' ; When not set, the global prefix (or /opt/php5) applies instead. ; Note: This directive can also be relative to the global prefix. ; Default Value: none ;prefix = /path/to/pools/$pool ; The address on which to accept FastCGI requests. ; Valid syntaxes are: ; 'ip.add.re.ss:port' - to listen on a TCP socket to a specific address on ; a specific port; ; 'port' - to listen on a TCP socket to all addresses on a ; specific port; ; '/path/to/unix/socket' - to listen on a unix socket. ; Note: This value is mandatory. ;listen = 127.0.0.1:9000 listen = /var/run/php5-fpm.sock ; Set listen(2) backlog. A value of '-1' means unlimited. ; Default Value: 128 (-1 on FreeBSD and OpenBSD) ;listen.backlog = -1 ; List of ipv4 addresses of FastCGI clients which are allowed to connect. ; Equivalent to the FCGI_WEB_SERVER_ADDRS environment variable in the original ; PHP FCGI (5.2.2+). Makes sense only with a tcp listening socket. Each address ; must be separated by a comma. If this value is left blank, connections will be ; accepted from any ip address. ; Default Value: any ;listen.allowed_clients = 127.0.0.1 ; Set permissions for unix socket, if one is used. In Linux, read/write ; permissions must be set in order to allow connections from a web server. Many ; BSD-derived systems allow connections regardless of permissions. 
; Default Values: user and group are set as the running user ; mode is set to 0666 ;listen.owner = www-data ;listen.group = www-data ;listen.mode = 0666 ; Unix user/group of processes ; Note: The user is mandatory. If the group is not set, the default user's group ; will be used. user = www-data group = www-data ; Choose how the process manager will control the number of child processes. ; Possible Values: ; static - a fixed number (pm.max_children) of child processes; ; dynamic - the number of child processes are set dynamically based on the ; following directives: ; pm.max_children - the maximum number of children that can ; be alive at the same time. ; pm.start_servers - the number of children created on startup. ; pm.min_spare_servers - the minimum number of children in 'idle' ; state (waiting to process). If the number ; of 'idle' processes is less than this ; number then some children will be created. ; pm.max_spare_servers - the maximum number of children in 'idle' ; state (waiting to process). If the number ; of 'idle' processes is greater than this ; number then some children will be killed. ; Note: This value is mandatory. pm = dynamic ; The number of child processes to be created when pm is set to 'static' and the ; maximum number of child processes to be created when pm is set to 'dynamic'. ; This value sets the limit on the number of simultaneous requests that will be ; served. Equivalent to the ApacheMaxClients directive with mpm_prefork. ; Equivalent to the PHP_FCGI_CHILDREN environment variable in the original PHP ; CGI. ; Note: Used when pm is set to either 'static' or 'dynamic' ; Note: This value is mandatory. pm.max_children = 50 ; The number of child processes created on startup. ; Note: Used only when pm is set to 'dynamic' ; Default Value: min_spare_servers + (max_spare_servers - min_spare_servers) / 2 pm.start_servers = 20 ; The desired minimum number of idle server processes. ; Note: Used only when pm is set to 'dynamic' ; Note: Mandatory when pm is set to 'dynamic' pm.min_spare_servers = 5 ; The desired maximum number of idle server processes. ; Note: Used only when pm is set to 'dynamic' ; Note: Mandatory when pm is set to 'dynamic' pm.max_spare_servers = 35 ; The number of requests each child process should execute before respawning. ; This can be useful to work around memory leaks in 3rd party libraries. For ; endless request processing specify '0'. Equivalent to PHP_FCGI_MAX_REQUESTS. ; Default Value: 0 pm.max_requests = 500 ; The URI to view the FPM status page. If this value is not set, no URI will be ; recognized as a status page. By default, the status page shows the following ; information: ; accepted conn - the number of request accepted by the pool; ; pool - the name of the pool; ; process manager - static or dynamic; ; idle processes - the number of idle processes; ; active processes - the number of active processes; ; total processes - the number of idle + active processes. ; max children reached - number of times, the process limit has been reached, ; when pm tries to start more children (works only for ; pm 'dynamic') ; The values of 'idle processes', 'active processes' and 'total processes' are ; updated each second. The value of 'accepted conn' is updated in real time. ; Example output: ; accepted conn: 12073 ; pool: www ; process manager: static ; idle processes: 35 ; active processes: 65 ; total processes: 100 ; max children reached: 1 ; By default the status page output is formatted as text/plain. 
Passing either ; 'html' or 'json' as a query string will return the corresponding output ; syntax. Example: ; http://www.foo.bar/status ; http://www.foo.bar/status?json ; http://www.foo.bar/status?html ; Note: The value must start with a leading slash (/). The value can be ; anything, but it may not be a good idea to use the .php extension or it ; may conflict with a real PHP file. ; Default Value: not set pm.status_path = /status ; The ping URI to call the monitoring page of FPM. If this value is not set, no ; URI will be recognized as a ping page. This could be used to test from outside ; that FPM is alive and responding, or to ; - create a graph of FPM availability (rrd or such); ; - remove a server from a group if it is not responding (load balancing); ; - trigger alerts for the operating team (24/7). ; Note: The value must start with a leading slash (/). The value can be ; anything, but it may not be a good idea to use the .php extension or it ; may conflict with a real PHP file. ; Default Value: not set ping.path = /ping ; This directive may be used to customize the response of a ping request. The ; response is formatted as text/plain with a 200 response code. ; Default Value: pong ping.response = pong ; The timeout for serving a single request after which the worker process will ; be killed. This option should be used when the 'max_execution_time' ini option ; does not stop script execution for some reason. A value of '0' means 'off'. ; Available units: s(econds)(default), m(inutes), h(ours), or d(ays) ; Default Value: 0 ;request_terminate_timeout = 0 ; The timeout for serving a single request after which a PHP backtrace will be ; dumped to the 'slowlog' file. A value of '0s' means 'off'. ; Available units: s(econds)(default), m(inutes), h(ours), or d(ays) ; Default Value: 0 ;request_slowlog_timeout = 0 ; The log file for slow requests ; Default Value: not set ; Note: slowlog is mandatory if request_slowlog_timeout is set ;slowlog = log/$pool.log.slow ; Set open file descriptor rlimit. ; Default Value: system defined value ;rlimit_files = 1024 ; Set max core size rlimit. ; Possible Values: 'unlimited' or an integer greater or equal to 0 ; Default Value: system defined value ;rlimit_core = 0 ; Chroot to this directory at the start. This value must be defined as an ; absolute path. When this value is not set, chroot is not used. ; Note: you can prefix with '$prefix' to chroot to the pool prefix or one ; of its subdirectories. If the pool prefix is not set, the global prefix ; will be used instead. ; Note: chrooting is a great security feature and should be used whenever ; possible. However, all PHP paths will be relative to the chroot ; (error_log, sessions.save_path, ...). ; Default Value: not set ;chroot = ; Chdir to this directory at the start. ; Note: relative path can be used. ; Default Value: current directory or / when chroot ;chdir = /var/www ; Redirect worker stdout and stderr into main error log. If not set, stdout and ; stderr will be redirected to /dev/null according to FastCGI specs. ; Note: on highloaded environement, this can cause some delay in the page ; process time (several ms). ; Default Value: no ;catch_workers_output = yes ; Pass environment variables like LD_LIBRARY_PATH. All $VARIABLEs are taken from ; the current environment. ; Default Value: clean env ;env[HOSTNAME] = $HOSTNAME ;env[PATH] = /usr/local/bin:/usr/bin:/bin ;env[TMP] = /tmp ;env[TMPDIR] = /tmp ;env[TEMP] = /tmp ; Additional php.ini defines, specific to this pool of workers. 
These settings ; overwrite the values previously defined in the php.ini. The directives are the ; same as the PHP SAPI: ; php_value/php_flag - you can set classic ini defines which can ; be overwritten from PHP call 'ini_set'. ; php_admin_value/php_admin_flag - these directives won't be overwritten by ; PHP call 'ini_set' ; For php_*flag, valid values are on, off, 1, 0, true, false, yes or no. ; Defining 'extension' will load the corresponding shared extension from ; extension_dir. Defining 'disable_functions' or 'disable_classes' will not ; overwrite previously defined php.ini values, but will append the new value ; instead. ; Note: path INI options can be relative and will be expanded with the prefix ; (pool, global or /opt/php5) ; Default Value: nothing is defined by default except the values in php.ini and ; specified at startup with the -d argument ;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f [email protected] ;php_flag[display_errors] = off ;php_admin_value[error_log] = /var/log/fpm-php.www.log ;php_admin_flag[log_errors] = on ;php_admin_value[memory_limit] = 32M php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i
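    One further thing worth ruling out (a hedged sketch, not a confirmed fix): the "chdir path '/mnt/webfiles' does not exist" error, combined with everything breaking only after a reboot, can mean the CIFS share is not yet mounted, or not visible, at the moment the php-fpm service starts. The service name below is an assumption for a stock Ubuntu 12.04 init job and may differ for a custom /opt/php5 build.
    # Confirm the share is mounted before the pool starts, and remount it if not
    mountpoint /mnt/webfiles || sudo mount -a
    # Restart the pool only after the mount is confirmed
    sudo service php5-fpm restart
    # In /etc/fstab, marking the entry as a network filesystem delays it until networking is up
    # //192.168.0.199/c$/Websites/ /mnt/webfiles cifs username=Jordan,password=...,gid=33,uid=33,_netdev 0 0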


  • SOA Suite 11g Native Format Builder Complex Format Example

    - by bob.webster
    This rather long posting details the steps required to process a grouping of fixed length records using Format Builder.   If it’s 10 pm and you’re feeling beat you might want to leave this until tomorrow.  But if it’s 10 pm and you need to get a Format Builder Complex template done, read on… The goal is to process individual orders from a file using the 11g File Adapter and Format Builder Sample Data =========== 001Square Widget            0245.98 102Triagular Widget         1120.00 403Circular Widget           0099.45 ORD8898302/01/2011 301Hexagon Widget         1150.98 ORD6735502/01/2011 The records are fixed length records representing a number of logical Order records. Each order record consists of a number of item records starting with a 3 digit number, followed by a single Summary Record which starts with the constant ORD. How can this file be processed so that the first poll returns the first order? 001Square Widget            0245.98 102Triagular Widget         1120.00 403Circular Widget           0099.45 ORD8898302/01/2011 And the second poll returns the second order? 301Hexagon Widget           1150.98 ORD6735502/01/2011 Note: if you need more than one order per poll, that’s also possible, see the “Multiple Messages” field in the “File Adapter Step 6 of 9” snapshot further down.   To follow along with this example you will need - Studio Edition Version 11.1.1.4.0    with the   - SOA Extension for JDeveloper 11.1.1.4.0 installed Both can be downloaded from here:  http://www.oracle.com/technetwork/middleware/soasuite/downloads/index.html You will not need a running WebLogic Server domain to complete the steps and Format Builder tests in this article.     Start with a SOA Composite containing a File Adapter The Format Builder is part of the File Adapter so start by creating a new SOA Project and Composite. Here is a quick summary for those not familiar with these steps - Start JDeveloper - From the Main Menu choose File->New - In the New Gallery window that opens Expand the “General” category and Select the Applications node.   Then choose SOA Application from the Items section on the right.  Finally press the OK button. - In Step 1 of the “Create SOA Application wizard” that appears enter an Application Name and an Directory of your     choice,   then press the Next button. - In Step 2 of the “Create SOA Application wizard”, press the Next button leaving all entries as defaulted. - In Step 3 of the “Create SOA Application wizard”, Enter a composite name of your choice and Press the Finish   Button These steps result in a new Application and SOA Project. The SOA Project contains a composite.xml file which is opened and shown below. For our example we have not defined a Mediator or a BPEL process to minimize the steps, but one or the other would eventually be needed to use the File Adapter we are about to create. Drag and drop the File Adapter icon from the Component Pallette onto either the LEFT side of the diagram under “Exposed Services” or the right side under “External References”.  (See the Green Circle in the image below).  Placing the adapter on the left side would indicate the file being processed is inbound to the composite, if the adapter is placed on the right side then the data is outbound to a file.     Note that the same Format Builder definition can be used in both directions.  
For example we could use the format with a File Adapter on the left side of the composite to parse fixed data into XML, modify the data in our Composite or BPEL process and then use the same Format Builder definition with a File adapter on the right side of the composite to write the data back out in the same fixed data format When the File Adapter is dropped on the Composite the File Adapter Wizard Appears. Skip Past the first page, Step 1 of 9 by pressing the Next button. In Step 2 enter a service name of your choice as shown below, then press Next   When the Native Format Builder appears, skip the welcome page by pressing next. Also press the Next button to accept the settings on Step 3 of 9 On Step 4, select Read File and press the Next button as shown below.   On Step 5 enter a directory that will contain a file with the input data, then  Press the Next button as shown below. In step 6, enter *.txt or another file format to select input files from the input directory mentioned in step 5. ALSO check the “Files contain Multiple Messages” checkbox and set the “Publish Messages in Batches of” field to 1.  The value can be set higher to increase the number of logical order group records returned on each poll of the file adapter.  In other words, it determines the number of Orders that will be sent to each instance of a Mediator or Composite processing using the File Adapter.   Skip Step 7 by pressing the Next button In Step 8 press the Gear Icon on the right side to load the Native Format Builder.       Native Format Builder  appears Before diving into the format, here is an overview of the process. Approach - Bottom up Assuming an Order is a grouping of item records and a summary record…. - Define a separate  Complex Type for each Record Type found in the group.    (One for itemRecord and one for summaryRecord) - Define a Complex Type to contain the Group of Record types defined above   (LogicalOrderRecord) - Define a top level element to represent an order.  (order)   The order element will be of type LogicalOrderRecord   Defining the Format In Step 1 select   “Create new”  and  “Complex Type” and “Next”   In Step two browse to and select a file containing the test data shown at the start of this article. A link is provided at the end of this article to download a file containing the test data. Press the Next button     In Step 3 Complex types must be define for each type of input record. Select the Root-Element and Click on the Add Complex Type icon This creates a new empty complex type definition shown below. The fastest way to create the definition is to highlight the first line of the Sample File data and drag the line onto the  <new_complex_type> Format Builder introspects the data and provides a grid to define additional fields. Change the “Complex Type Name” to  “itemRecord” Then click on the ruler to indicate the position of fixed columns.  Drag the red triangle icons to the exact columns if necessary. Double click on an existing red triangle to remove an unwanted entry. In the case below fields are define in columns 0-3, 4-28, 29-eol When the field definitions are correct, press the “Generate Fields” button. Field entries named C1, C2 and C3 will be created as shown below. Click on the field names and rename them from C1->itemNum, C2->itemDesc and C3->itemCost  When all the fields are correctly defined press OK to save the complex type.        Next, the process is repeated to define a Complex Type for the SummaryRecord. 
Select the Root-Element in the schema tree and press the new complex type icon Then highlight and drag the Summary Record from the sample data onto the <new_complex_type>   Change the complex type name to “summaryRecord” Mark the fixed fields for Order Number and Order Date. Press the Generate Fields button and rename C1 and C2 to itemNum and orderDate respectively.   The last complex type to be defined is a type to hold the group of items and the summary record. Select the Root-Element in the schema tree and click the new complex type icon Select the “<new_complex_type>” entry and click the pencil icon   On the Complex Type Details page change the name and type of each input field. Change line 1 to be named item and set the Type  to “itemRecord” Change line 2 to be named summary and set the Type to “summaryRecord” We also need to indicate that itemRecords repeat in the input file. Click the pencil icon at the right side of the item line. On the Edit Details page change the “Max Occurs” entry from 1 to UNBOUNDED. We also need to indicate how to identify an itemRecord.  Since each item record has “.” in column 32 we can use this fact to differentiate an item record from a summary record. Change the “Look Ahead” field to value 32 and enter a period in the “Look For” field Press the OK button to save entry.     Finally, its time to create a top level element to represent an order. Select the “Root-Element” in the schema tree and press the New element icon Click on the <new_element> and press the pencil icon.   Set the Element Name to “order” and change the Data Type to “logicalOrderRecord” Press the OK button to save the element definition.   The final definition should match the screenshot below. Press the Next Button to view the definition source.     Press the Test Button to test the definition   Press the Green Triangle Icon to run the test.   And we are presented with an unwelcome error. The error states that the processor ran out of data while working through the definition. The processor was unable to differentiate between itemRecords and summaryRecords and therefore treated the entire file as a list of itemRecords.  At end of file, the “summary” portion of the logicalOrderRecord remained unprocessed but mandatory.   This root cause of this error is the loss of our “lookAhead” definition used to identify itemRecords. This appears to be a bug in the  Native Format Builder 11.1.1.4.0 Luckily, a simple workaround exists. Press the Cancel button and return to the “Step 4 of 4” Window. Manually add    nxsd:lookAhead="32" nxsd:lookFor="."   attributes after the maxOccurs attribute of the item element. as shown in the highlighted text below.   When the lookAhead and lookFor attributes have been added Press the Test button and on the Test page press the Green Triangle. The test is now successful, the first order in the file is returned by the File Adapter.     Below is a complete listing of the Result XML from the right column of the screen above   Try running it The downloaded input test file and completed schema file can be used for testing without following all the Native Format Builder steps in this example. Use the following link to download a file containing the sample data. Download Sample Input Data This is the best approach rather than cutting and pasting the input data at the top of the article.  Since the data is fixed length it’s very important to watch out for trailing spaces in the data and to ensure an eol character at the end of every line. 
The download file is correctly formatted. The final schema definition can be downloaded at the following link Download Completed Schema Definition   - Save the inputData.txt file to a known location like the xsd folder in your project. - Save the inputData_6.xsd file to the xsd folder in your project. - At step 1 in the Native Format Builder wizard  (as shown above) check the “Edit existing” radio button,    then browse and select the inputData_6.xsd file - At step 2 of the Format Builder configuration Wizard (as shown above) supply the path and filename for    the inputData.txt file. - You can then proceed to the test page and run a test. - Remember the wizard bug will drop the lookAhead and lookFor attributes,  you will need to manually add   nxsd:lookAhead="32" nxsd:lookFor="."    after the maxOccurs attribute of the item element in the   LogicalOrderRecord Complex Type.  (as shown above)   Good Luck with your Format Project
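    Following up on the warning about trailing spaces and end-of-line characters: a quick way to inspect a fixed-length input file before running the Format Builder test (a small sketch using standard GNU tools, assuming the sample file is saved as inputData.txt in the current directory) is:
    # Show every character including line endings ($ marks the end of each line)
    cat -A inputData.txt
    # Flag lines that end in a space, then print each line's length to spot padding problems
    grep -n ' $' inputData.txt
    awk '{ print NR": "length($0) }' inputData.txt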


  • Quick guide to Oracle IRM 11g: Configuring SSL

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index So far in this guide we have an IRM Server up and running, however I skipped over SSL configuration in the previous article because I wanted to focus in more detail now. You can, if you wish, not bother with setting up SSL, but considering this is a security technology it is worthwhile doing. Contents Setting up a one way, self signed SSL certificate in WebLogic Setting up an official SSL certificate in Apache 2.x Configuring Apache to proxy traffic to the IRM server There are two common scenarios in which an Oracle IRM server is configured. For a development or evaluation system, people usually communicate directly to the WebLogic Server running the IRM service. However in a production environment and for some proof of concept evaluations that require a setup reflecting a production system, the traffic to the IRM server travels via a web server proxy, commonly Apache. In this guide we are building an Oracle Enterprise Linux based IRM service and this article will go over the configuration of SSL in WebLogic and also in Apache. Like in the past articles, we are going to use two host names in the configuration below,irm.company.com will refer to the public Apache server irm.company.internal will refer to the internal WebLogic IRM server Setting up a one way, self signed SSL certificate in WebLogic First lets look at creating just a simple self signed SSL certificate to be used in WebLogic. This is a quick and easy way to get SSL working in your environment, however the downside is that no browsers are going to trust this certificate you create and you'll need to manually install the certificate onto any machine's communicating with the server. This is fine for development or when you have only a few users evaluating the system, but for any significant use it's usually better to have a fully trusted certificate in use and I explain that in the next section. But for now lets go through creating, installing and testing a self signed certificate. We use a library in Java to create the certificates, open a console and running the following commands. Note you should choose your own secure passwords whenever you see password below. [oracle@irm /] source /oracle/middleware/wlserver_10.3/server/bin/setWLSEnv.sh [oracle@irm /] cd /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/ [oracle@irm /] java utils.CertGen -selfsigned -certfile MyOwnSelfCA.cer -keyfile MyOwnSelfKey.key -keyfilepass password -cn "irm.oracle.demo" [oracle@irm /] java utils.ImportPrivateKey -keystore MyOwnIdentityStore.jks -storepass password -keypass password -alias trustself -certfile MyOwnSelfCA.cer.pem -keyfile MyOwnSelfKey.key.pem -keyfilepass password [oracle@irm /] keytool -import -trustcacerts -alias trustself -keystore TrustMyOwnSelf.jks -file MyOwnSelfCA.cer.der -keyalg RSA We now have two Java Key Stores, MyOwnIdentityStore.jks and TrustMyOwnSelf.jks. These contain keys and certificates which we will use in WebLogic Server. Now we need to tell the IRM server to use these stores when setting up SSL connections for incoming requests. Make sure the Admin server is running and login into the WebLogic Console at http://irm.company.intranet:7001/console and do the following; In the menu on the left, select the + next to Environment to expose the submenu, then click on Servers. You will see two servers in the list, AdminServer(admin) and IRM_server1. 
If the IRM server is running, shut it down either by hitting CONTROL + C in the console window it was started from, or you can switch to the CONTROL tab, select IRM_server1 and then select the Shutdown menu and then Force Shutdown Now. In the Configuration tab select IRM_server1 and switch to the Keystores tab. By default WebLogic Server uses it's own demo identity and trust. We are now going to switch to the self signed one's we've just created. So select the Change button and switch to Custom Identity and Custom Trust and hit save. Now we have to complete the resulting fields, the setting's i've used in my evaluation server are below. IdentityCustom Identity Keystore: /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/MyOwnIdentityStore.jks Custom Identity Keystore Type: JKS Custom Identity Keystore Passphrase: password Confirm Custom Identity Keystore Passphrase: password TrustCustom Trust Keystore: /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/TrustMyOwnSelf.jks Custom Trust Keystore Type: JKS Custom Trust Keystore Passphrase: password Confirm Custom Trust Keystore Passphrase: password Now click on the SSL tab for the IRM_server1 and enter in the alias and passphrase, in my demo here the details are; IdentityPrivate Key Alias: trustself Private Key Passphrase: password Confirm Private Key Passphrase: password And hit save. Now lets test a connection to the IRM server over HTTPS using SSL. Go back to a console window and start the IRM server, a quick reminder on how to do this is... [oracle@irm /] cd /oracle/middleware/user_projects/domains/irm_domain/bin [oracle@irm /] ./startManagedWeblogic IRM_server1 Once running, open a browser and head to the SSL port of the server. By default the IRM server will be listening on the URL https://irm.company.intranet:16101/irm_rights. Note in the example image on the right the port is 7002 because it's a system that has the IRM services installed on the Admin server, this isn't typical (or advisable). Your system is going to have a separate managed server which will be listening on port 16101. Once you open this address you will notice that your browser is going to complain that the server certificate is untrusted. The images on the right show how Firefox displays this error. You are going to be prompted every time you create a new SSL session with the server, both from the browser and more annoyingly from the IRM Desktop. If you plan on always using a self signed certificate, it is worth adding it to the Windows certificate store so that when you are accessing sealed content you do not keep being informed this certificate is not trusted. Follow these instructions (which are for Internet Explorer 8, they may vary for your version of IE.) Start Internet Explorer and open the URL to your IRM server over SSL, e.g. https://irm.company.intranet:16101/irm_rights. IE will complain that about the certificate, click on Continue to this website (not recommended). From the IE Tools menu select Internet Options and from the resulting dialog select Security and then click on Trusted Sites and then the Sites button. Add to the list of trusted sites a URL which mates the server you are accessing, e.g. https://irm.company.intranet/ and select OK. Now refresh the page you were accessing and next to the URL you should see a red cross and the words Certificate Error. Click on this button and select View Certificates. You will now see a dialog with the details of the self signed certificate and the Install Certificate... 
button should be enabled. Click on this to start the wizard. Click next and you'll be asked where you should install the certificate. Change the option to Place all certificates in the following store. Select browse and choose the Trusted Root Certification Authorities location and hit OK. You'll then be prompted to install the certificate and answer yes. You also need to import the root signed certificate into the same location, so once again select the red Certificate Error option and this time when viewing the certificate, switch to the Certification Path tab and you should see a CertGenCAB certificate. Select this and then click on View Certificate and go through the same process as above to import the certificate into the store. Finally close all instances of the IE browser and re-access the IRM server URL again, this time you should not receive any errors. Setting up an official SSL certificate in Apache 2.x At this point we now have an IRM server that you can communicate with over SSL. However this certificate isn't trusted by any browser because it's path of trust doesn't end in a recognized certificate authority (CA). Also you are communicating directly to the WebLogic Server over a non standard SSL port, 16101. In a production environment it is common to have another device handle the initial public internet traffic and then proxy this to the WebLogic server. The diagram below shows a very simplified view of this type of deployment. What i'm going to walk through next is configuring Apache to proxy traffic to a WebLogic server and also to use a real SSL certificate from an official CA. First step is to configure Apache to handle incoming requests over SSL. In this guide I am configuring the IRM service in Oracle Enterprise Linux 5 update 3 and Apache 2.2.3 which came with OpenSSL and mod_ssl components. Before I purchase an SSL certificate, I need to generate a certificate request from the server. Oracle.com uses Verisign and for my own personal needs I use cheaper certificates from GoDaddy. The following instructions are specific to Apache, but there are many references out there for other web servers. For Apache I have OpenSSL and the commands are; [oracle@irm /] cd /usr/bin [oracle@irm bin] openssl genrsa -des3 -out irm-apache-server.key 2048 Generating RSA private key, 2048 bit long modulus ............................+++ .........+++ e is 65537 (0x10001) Enter pass phrase for irm-apache-server.key: Verifying - Enter pass phrase for irm-apache-server.key: [oracle@irm bin] openssl req -new -key irm-apache-server.key -out irm-apache-server.csr Enter pass phrase for irm-apache-server.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. 
-----
Country Name (2 letter code) [GB]:US
State or Province Name (full name) [Berkshire]:CA
Locality Name (eg, city) [Newbury]:San Francisco
Organization Name (eg, company) [My Company Ltd]:Oracle
Organizational Unit Name (eg, section) []:Security
Common Name (eg, your name or your server's hostname) []:irm.company.com
Email Address []:[email protected]

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:testing
An optional company name []:

You must make sure to remember the pass phrase you used in the initial key generation; you will need it later when configuring Apache. In the /usr/bin directory there are now two new files. irm-apache-server.csr contains our certificate request and is what you cut and paste, or upload, to your certificate authority when you purchase and validate your SSL certificate. In response you will typically get two files: your server certificate and another certificate file that will likely contain a set of certificates from your CA which validate your certificate's trust. Next we need to configure Apache to use these files. Typically there is an ssl.conf file which is where all the SSL configuration is done. On my Oracle Enterprise Linux server this file is located at /etc/httpd/conf.d/ssl.conf and I've added the following lines.

<VirtualHost irm.company.com>
# Setup SSL for irm.company.com
ServerName irm.company.com
SSLEngine On
SSLCertificateFile /oracle/secure/irm.company.com.crt
SSLCertificateKeyFile /oracle/secure/irm.company.com.key
SSLCertificateChainFile /oracle/secure/gd_bundle.crt
</VirtualHost>

After restarting Apache (apachectl restart) I can now attempt to connect to the Apache server in a web browser, https://irm.company.com/. If all is configured correctly I should see an Apache test page delivered to me over HTTPS.

Configuring Apache to proxy traffic to the IRM server

The final piece in setting up SSL is to have Apache proxy requests for the IRM server, but do so securely. So the requests to Apache will be over HTTPS using a legitimate certificate, but we also configure Apache to proxy these requests internally across to the IRM server using SSL with the self signed certificate we generated at the start of this article. To do this proxying we use the WebLogic Web Server plugin for Apache, which you can download from Oracle. Download the zip file and extract it onto the server. The extraction reveals a set of zip files, each one specific to a supported web server. In my instance I am using Apache 2.2 32-bit on an Oracle Enterprise Linux 64-bit server. If you are not sure what version your Apache server is, run the command /usr/sbin/httpd -V and you'll see the version and whether it is 32 or 64 bit. Mine is a 32-bit Apache build so I need to extract the file WLSPlugin1.1-Apache2.2-linux32-x86.zip. Then from the resulting lib folder copy the file mod_wl.so into /usr/lib/httpd/modules/. First we want to test that the plug-in will work for regular HTTP traffic. Edit the httpd.conf for Apache and add the following section at the bottom.
LoadModule weblogic_module modules/mod_wl.so

<IfModule mod_weblogic.c>
   WebLogicHost irm.company.internal
   WebLogicPort 16100
   WLLogFile /tmp/wl-proxy.log
</IfModule>

<Location /irm_rights>
   SetHandler weblogic-handler
</Location>

<Location /irm_desktop>
   SetHandler weblogic-handler
</Location>

<Location /irm_sealing>
   SetHandler weblogic-handler
</Location>

<Location /irm_services>
   SetHandler weblogic-handler
</Location>

Now restart Apache again (apachectl restart) and open a browser to http://irm.company.com/irm_rights. Apache will proxy the HTTP traffic from port 80 of your Apache server to the IRM service listening on port 16100 of the WebLogic managed server. Note that above I have included all four of the Locations you might wish to proxy. /irm_rights is the URL to the management website, /irm_desktop is the URL used by the IRM Desktop to communicate, /irm_sealing is for web services based document sealing and /irm_services is for the IRM server web services. The last two are typically only used when you have the IRM server integrated with another application, and it is unlikely you'd be accessing these resources from the public facing Apache server. However, just in case, I've mentioned them above. Now let's enable SSL communication from Apache to WebLogic. In the ZIP file we extracted were some more modules we need to copy into the Apache folder. Looking back in the lib folder that we extracted, there are some more files; copy the following into the /usr/lib/httpd/modules/ folder.

libwlssl.so
libnnz11.so
libclntsh.so.11.1

Now the documentation states that you should only need to do this, but I found that I also needed to create an environment variable called LD_LIBRARY_PATH and point it to the folder /usr/lib/httpd/modules/. If I didn't do this, starting Apache with the WebLogic module configured for SSL would throw the error:

[crit] (20014)Internal error: WL SSL Init failed for server: (null) on 0

So I had to edit the file /etc/profile and add the following lines at the bottom. You may already have the LD_LIBRARY_PATH variable defined, in which case simply add this path to it.

LD_LIBRARY_PATH=/usr/lib/httpd/modules/
export LD_LIBRARY_PATH

Now, the WebLogic plug-in uses an Oracle Wallet to store the required certificates. You'll need to copy the self signed certificate from the IRM server over to the Apache server. Copy MyOwnSelfCA.cer.der into the same folder where you are storing your public certificates; in my example this is /oracle/secure. It's worth mentioning these files should ONLY be readable by root (the user that starts Apache). Now let's create an Oracle Wallet and import the self signed certificate from the IRM server. The orapki tool was included in the bin folder of the Apache 1.1 plugin zip you extracted.

orapki wallet create -wallet /oracle/secure/my-wallet -auto_login_only
orapki wallet add -wallet /oracle/secure/my-wallet -trusted_cert -cert MyOwnSelfCA.cer.der -auto_login_only

Finally change the httpd.conf to reflect that we want the WebLogic Apache plug-in to use HTTPS/SSL and not just plain HTTP.

<IfModule mod_weblogic.c>
   WebLogicHost irm.company.internal
   WebLogicPort 16101
   SecureProxy ON
   WLSSLWallet /oracle/secure/my-wallet
   WLLogFile /tmp/wl-proxy.log
</IfModule>

Then restart Apache once more and you can go back to the browser to test the communication. Opening the URL https://irm.company.com/irm_rights will proxy your request to the WebLogic server at https://irm.company.internal:16101/irm_rights.
At this point you have a fully functional Oracle IRM service, the next step is to create a sealed document and test the entire system.

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5 Part 1: Table per Hierarchy (TPH)

    - by mortezam
A simple strategy for mapping classes to database tables might be “one table for every persistent entity class.” This approach sounds simple enough and, indeed, works well until we encounter inheritance. Inheritance is such a visible structural mismatch between the object-oriented and relational worlds because object-oriented systems model both “is a” and “has a” relationships. SQL-based models provide only "has a" relationships between entities; SQL database management systems don’t support type inheritance, and even when it’s available, it’s usually proprietary or incomplete. There are three different approaches to representing an inheritance hierarchy:

Table per Hierarchy (TPH): Enable polymorphism by denormalizing the SQL schema, and utilize a type discriminator column that holds type information.
Table per Type (TPT): Represent "is a" (inheritance) relationships as "has a" (foreign key) relationships.
Table per Concrete class (TPC): Discard polymorphism and inheritance relationships completely from the SQL schema.

I will explain each of these strategies in a series of posts, and this one is dedicated to TPH. In this series we'll dig deeply into each of these strategies and learn "why" to choose them as well as "how" to implement them. Hopefully it will give you a better idea about which strategy to choose in a particular scenario.

Inheritance Mapping with Entity Framework Code First
All of the inheritance mapping strategies that we discuss in this series will be implemented with EF Code First CTP5. The CTP5 build of the new EF Code First library was released by the ADO.NET team earlier this month. EF Code First enables a pretty powerful code-centric development workflow for working with data. I’m a big fan of the EF Code First approach, and I’m pretty excited about the productivity and power that it brings. When it comes to inheritance mapping, not only does Code First fully support all the strategies, it also gives you ultimate flexibility to work with domain models that involve inheritance. The fluent API for inheritance mapping in CTP5 has been improved a lot and is now more intuitive and concise in comparison to CTP4.

A Note For Those Who Follow Other Entity Framework Approaches
If you are following EF's "Database First" or "Model First" approaches, I still recommend reading this series since, although the implementation is Code First specific, the explanations around each of the strategies apply equally to all approaches.

A Note For Those Who Are New to Entity Framework and Code First
If you choose to learn EF you've chosen well. If you choose to learn EF with Code First you've done even better. To get started, you can find a great walkthrough by Scott Guthrie here and another one by the ADO.NET team here. In this post, I assume you have already set up your machine to do Code First development and also that you are familiar with Code First fundamentals and basic concepts. You might also want to check out my other posts on EF Code First like Complex Types and Shared Primary Key Associations.

A Top Down Development Scenario
These posts take a top-down approach; they assume that you’re starting with a domain model and trying to derive a new SQL schema. Therefore, we start with an existing domain model, implement it in C# and then let Code First create the database schema for us. However, the mapping strategies described are just as relevant if you’re working bottom up, starting with existing database tables.
I’ll show some tricks along the way that help you deal with non-perfect table layouts. Let’s start with the mapping of entity inheritance.

The Domain Model
In our domain model, we have a BillingDetail base class which is abstract (note the italic font on the UML class diagram below). We allow various billing types and represent them as subclasses of the BillingDetail class. For now, we support CreditCard and BankAccount:

Implement the Object Model with Code First
As always, we start with the POCO classes. Note that in our DbContext, I only define one DbSet for the base class, which is BillingDetail. Code First will find the other classes in the hierarchy based on the Reachability Convention.

public abstract class BillingDetail
{
    public int BillingDetailId { get; set; }
    public string Owner { get; set; }
    public string Number { get; set; }
}

public class BankAccount : BillingDetail
{
    public string BankName { get; set; }
    public string Swift { get; set; }
}

public class CreditCard : BillingDetail
{
    public int CardType { get; set; }
    public string ExpiryMonth { get; set; }
    public string ExpiryYear { get; set; }
}

public class InheritanceMappingContext : DbContext
{
    public DbSet<BillingDetail> BillingDetails { get; set; }
}

This object model is all that is needed to enable inheritance with Code First. If you put this in your application you would be able to immediately start working with the database and do CRUD operations. Before going into details about how EF Code First maps this object model to the database, we need to learn about one of the core concepts of inheritance mapping: polymorphic and non-polymorphic queries.

Polymorphic Queries
LINQ to Entities and EntitySQL, as object-oriented query languages, both support polymorphic queries—that is, queries for instances of a class and all instances of its subclasses. For example, consider the following query:

IQueryable<BillingDetail> linqQuery = from b in context.BillingDetails select b;
List<BillingDetail> billingDetails = linqQuery.ToList();

Or the same query in EntitySQL:

string eSqlQuery = @"SELECT VALUE b FROM BillingDetails AS b";
ObjectQuery<BillingDetail> objectQuery = ((IObjectContextAdapter)context).ObjectContext
                                                                         .CreateQuery<BillingDetail>(eSqlQuery);
List<BillingDetail> billingDetails = objectQuery.ToList();

linqQuery and eSqlQuery are both polymorphic and return a list of objects of the type BillingDetail, which is an abstract class, but the actual concrete objects in the list are of the subtypes of BillingDetail: CreditCard and BankAccount.

Non-polymorphic Queries
All LINQ to Entities and EntitySQL queries are polymorphic; they return not only instances of the specific entity class to which they refer, but all subclasses of that class as well. Non-polymorphic queries, on the other hand, are queries whose polymorphism is restricted and which return only instances of a particular subclass. In LINQ to Entities, this can be specified by using the OfType<T>() method.
For example, the following query returns only instances of BankAccount:

IQueryable<BankAccount> query = from b in context.BillingDetails.OfType<BankAccount>() select b;

EntitySQL has an OFTYPE operator that does the same thing:

string eSqlQuery = @"SELECT VALUE b FROM OFTYPE(BillingDetails, Model.BankAccount) AS b";

In fact, the above query with the OFTYPE operator is a short form of the following query expression that uses the TREAT and IS OF operators:

string eSqlQuery = @"SELECT VALUE TREAT(b as Model.BankAccount)
                     FROM BillingDetails AS b
                     WHERE b IS OF(Model.BankAccount)";

(Note that in the above queries, Model.BankAccount is the fully qualified name for the BankAccount class. You need to replace "Model" with your own namespace name.)

Table per Class Hierarchy (TPH)
An entire class hierarchy can be mapped to a single table. This table includes columns for all properties of all classes in the hierarchy. The concrete subclass represented by a particular row is identified by the value of a type discriminator column. You don’t have to do anything special in Code First to enable TPH; it's the default inheritance mapping strategy. This mapping strategy is a winner in terms of both performance and simplicity. It’s the best-performing way to represent polymorphism—both polymorphic and non-polymorphic queries perform well—and it’s even easy to implement by hand. Ad-hoc reporting is possible without complex joins or unions, and schema evolution is straightforward.

Discriminator Column
As you can see in the DB schema above, Code First has to add a special column to distinguish between persistent classes: the discriminator. This isn’t a property of the persistent class in our object model; it’s used internally by EF Code First. By default, the column name is "Discriminator", and its type is string. The values default to the persistent class names, in this case “BankAccount” or “CreditCard”. EF Code First automatically sets and retrieves the discriminator values.

TPH Requires Properties in Subclasses to be Nullable in the Database
TPH has one major problem: columns for properties declared by subclasses will be nullable in the database. For example, Code First created an (INT, NULL) column to map the CardType property in the CreditCard class. However, in a typical mapping scenario, Code First always creates an (INT, NOT NULL) column in the database for an int property in a persistent class. But in this case, since a BankAccount instance won’t have a CardType property, the CardType field must be NULL for that row, so Code First creates an (INT, NULL) column instead. If your subclasses each define several non-nullable properties, the loss of NOT NULL constraints may be a serious problem from the point of view of data integrity.

TPH Violates the Third Normal Form
Another important issue is normalization. We’ve created functional dependencies between nonkey columns, violating the third normal form. Basically, the value of the Discriminator column determines the corresponding values of the columns that belong to the subclasses (e.g. BankName), but Discriminator is not part of the primary key for the table. As always, denormalization for performance can be misleading, because it sacrifices long-term stability, maintainability, and the integrity of data for immediate gains that may also be achieved by proper optimization of the SQL execution plans (in other words, ask your DBA).
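To see the discriminator at work end to end, here is a small usage sketch (my own illustration, assuming the InheritanceMappingContext and entity classes shown earlier and a default Code First connection): it saves one row per subclass and then reads them back polymorphically. The sample values are placeholders.

using (var context = new InheritanceMappingContext())
{
    // One row per concrete subtype; Code First fills the Discriminator column itself.
    context.BillingDetails.Add(new BankAccount
    {
        Owner = "Joe Bloggs",
        Number = "12345678",
        BankName = "Sample Bank",
        Swift = "SMPLGB2L"
    });
    context.BillingDetails.Add(new CreditCard
    {
        Owner = "Joe Bloggs",
        Number = "4111111111111111",
        CardType = 1,
        ExpiryMonth = "06",
        ExpiryYear = "2013"
    });
    context.SaveChanges();

    // Polymorphic read: each row is materialized as its concrete subtype,
    // chosen from the value stored in the Discriminator column.
    foreach (BillingDetail detail in context.BillingDetails)
    {
        Console.WriteLine("{0}: {1}", detail.GetType().Name, detail.Owner);
    }
}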
Generated SQL Query
Let's take a look at the SQL statements that EF Code First sends to the database when we write queries in LINQ to Entities or EntitySQL. For example, the polymorphic query for BillingDetails that you saw generates the following SQL statement:

SELECT
 [Extent1].[Discriminator] AS [Discriminator],
 [Extent1].[BillingDetailId] AS [BillingDetailId],
 [Extent1].[Owner] AS [Owner],
 [Extent1].[Number] AS [Number],
 [Extent1].[BankName] AS [BankName],
 [Extent1].[Swift] AS [Swift],
 [Extent1].[CardType] AS [CardType],
 [Extent1].[ExpiryMonth] AS [ExpiryMonth],
 [Extent1].[ExpiryYear] AS [ExpiryYear]
FROM [dbo].[BillingDetails] AS [Extent1]
WHERE [Extent1].[Discriminator] IN ('BankAccount','CreditCard')

And the non-polymorphic query for the BankAccount subclass generates this SQL statement:

SELECT
 [Extent1].[BillingDetailId] AS [BillingDetailId],
 [Extent1].[Owner] AS [Owner],
 [Extent1].[Number] AS [Number],
 [Extent1].[BankName] AS [BankName],
 [Extent1].[Swift] AS [Swift]
FROM [dbo].[BillingDetails] AS [Extent1]
WHERE [Extent1].[Discriminator] = 'BankAccount'

Note how Code First adds a restriction on the discriminator column and also how it only selects the columns that belong to the BankAccount entity.

Change Discriminator Column Data Type and Values With Fluent API
Sometimes, especially in legacy schemas, you need to override the conventions for the discriminator column so that Code First can work with the schema. The following fluent API code will change the discriminator column name to "BillingDetailType" and the values to "BA" and "CC" for BankAccount and CreditCard respectively:

protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
{
    modelBuilder.Entity<BillingDetail>()
                .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue("BA"))
                .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue("CC"));
}

Changing the data type of the discriminator column is also interesting. In the above code, we passed strings to the HasValue method, but this method has been defined to accept a value of type object:

public void HasValue(object value);

Therefore, if for example we pass a value of type int to it, Code First not only uses our desired values (i.e. 1 & 2) in the discriminator column but also changes the column type to (INT, NOT NULL):

modelBuilder.Entity<BillingDetail>()
            .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue(1))
            .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue(2));

Summary
In this post we learned about Table per Hierarchy as the default mapping strategy in Code First. The disadvantages of the TPH strategy may be too serious for your design—after all, denormalized schemas can become a major burden in the long run. Your DBA may not like it at all. In the next post, we will learn about the Table per Type (TPT) strategy, which doesn’t expose you to this problem.

References
ADO.NET team blog
Java Persistence with Hibernate book

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 3

    - by Tarun Arora
Welcome back once again. In Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing the application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies. In Part 2 of Load and Web Performance Testing using Visual Studio 2010 I discussed the details of web performance & load tests as well as why it’s important to follow a goal based pattern while performance testing your application. In Part 3 I’ll be discussing Test Result Analysis, Test Result Drill Through, Test Report Generation, Test Run Comparison, the ASP.NET Profiler and some closing thoughts.

Test Results – I see some creepy worms!
In Part 2 we put together a web performance test and a load test; let's run the load test to see how the web site responds to the load simulation. While the load test is running you will be able to see close to real time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways:

Monitor a running load test - A condensed set of the performance counter data is maintained in memory. To prevent the results memory requirements from growing unbounded, up to 200 samples for each performance counter are maintained. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples.

After the load test run is completed - The test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. Below you can see a screen shot of the summary view; this provides key results in a format that is compact and easy to read. You can also print the load test summary, which is generated after the test has completed or been stopped.

Analyse the load test results of a previously run load test – We’ll see this in the section where I discuss comparison between two test runs.

The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, or drill down to the user activity chart where you can hover over to see more details of the test run.

Generate Report => Test Run Comparisons
The level of reporting you can generate using the Load Test Analyser is astonishing. You have the option to create Excel reports and conduct side by side analysis of two test results or to track trend analysis. The tool also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET profiler report to conduct further analysis as well. View Data and Diagnostic Attachments opens the Choose Diagnostic Data Adapter Attachment dialog box to select an adapter to analyse the result type. For example, you can select an IntelliTrace adapter, click OK and open the IntelliTrace summary for the test agent that was used in the load test.

Compare results
This creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screen shots from the MSDN documentation; I would highly recommend exploring the wealth of knowledge available on MSDN.
Leaving Thoughts
While load testing the application with an excessive load for a longer duration of time, I managed to bring IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that IIS had run out of threads as all the threads were busy processing existing requests. One easy way of fixing this is by increasing the default number of allocated threads, but this might escalate the problem. The better suggestion is to try and drill down to the actual root cause of the problem. Whenever the garbage collection runs it stops processing any pages, so all requests that come in during that period are queued up; realistically, though, the garbage collection completes in a fraction of a second. To understand this better let's look at the .NET heap. It is divided into the large object heap and the small object heap; anything greater than 85 KB in size will be allocated to the large object heap. The large object heap is non-compacting, and remember that large objects are expensive to move around, so if you are allocating something in the large object heap, make sure that you really need it! The small object heap, on the other hand, is divided into generations, so all objects that are supposed to be short-lived live in Gen-0 and the long living objects eventually move to Gen-2 as garbage collection goes through (the short code sketch below shows this promotion directly). As you can see in the picture below, all objects smaller than 85 KB are first assigned to Gen-0; when Gen-0 fills up and a new object comes in and finds Gen-0 full, the garbage collection process is started. The process checks for all the dead objects, marks them as valid candidates for deletion to free up memory, and promotes all the remaining objects in Gen-0 to Gen-1. So in the future, whenever you clean up Gen-1 you have to clean up Gen-0 as well. When you fill up Gen-0 again, all of Gen-1's dead objects are collected and the rest are moved to Gen-2, and Gen-0 objects are moved to Gen-1 to free up Gen-0, but by this time your garbage collection process has started to take much more time than it usually takes. Now, as I mentioned earlier, when garbage collection is being run all page requests that come in during that period are queued up. Does this explain why page requests might be getting queued up? Apart from this it could also be the case that you are waiting for a long running database process to complete.

Let's explore the heap a bit more… What is really a case of crisis is when objects live long enough to make it to Gen-2 and then die; this is definitely a high cost operation. But sometimes you need objects in memory, for example when you cache data you hold on to the objects because you need to use them right across the user session, which is acceptable. But if you wanted to see what extreme caching can do to your server, write a simple application that chucks a lot of data into the cache and run a load test over it for about 10-15 minutes, forcing a lot of data into memory and causing the heap to run out of memory. If you get to such a state where you start running out of memory, IIS, as a mode of recovery, restarts the worker process. It is a great way to free up all your memory in the heap, but this also clears the cache. The problem with this is that if the customer had 10 items in their shopping basket and that data was stored in the application cache, the user basket will now be empty, forcing them either to get frustrated and go to a competitor website or, if the customer is really patient, to give it another try!
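Going back to the generational promotion described above, here is a tiny console sketch (my own illustration, not from the original walkthrough) that uses GC.GetGeneration to watch a small object get promoted through the generations and to show that a large allocation goes straight to the large object heap:

using System;

class GenerationDemo
{
    static void Main()
    {
        var data = new byte[1024];                      // small object, starts life in Gen-0
        Console.WriteLine(GC.GetGeneration(data));      // typically prints 0

        GC.Collect();                                   // survives a collection -> promoted to Gen-1
        Console.WriteLine(GC.GetGeneration(data));      // typically prints 1

        GC.Collect();                                   // survives again -> promoted to Gen-2
        Console.WriteLine(GC.GetGeneration(data));      // typically prints 2

        // Anything over roughly 85 KB is allocated on the large object heap,
        // which is reported as the highest generation straight away.
        var large = new byte[100 * 1024];
        Console.WriteLine(GC.GetGeneration(large));     // typically prints 2
    }
}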
How can you address this? Well, there are two ways:

1. Workaround – An x86 processor only allows a maximum of 4 GB of RAM to be addressed, which means the machine effectively has around 3.4 GB of RAM available. The OS needs about 1.5 GB of RAM to run efficiently, and IIS and the .NET framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because team builds by default build your application in 'Compile as Any CPU' mode, the application is built such that it will run in 32-bit mode on an x86 processor and in 64-bit mode on an x64 processor. The problem with this is that not all applications are really x64 compatible, especially if you are using COM objects or external libraries. So, as a quick win, if you compile your application in x86 mode by changing the 'Any CPU' selection to x86 in the team build, you will be able to run your application on an x64 machine in 32-bit mode (WOW – by running Windows on Windows), and what that means is you could use 8 GB+ worth of RAM; if you take away everything else, your application will roughly get a heap size of at least 4 GB to play with, which is immense. If you need a heap size of more than 4 GB you have either built software for NASA or there is something fundamentally wrong in your application.

2. Solution – Now that you have put a workaround in place, IIS will not restart the worker process as regularly, which means you can take a breather and start working to get to the root cause of this memory leak. But this begs a question: "How do I identify possible memory leaks in my application?" Well, I won't say that there is one single tool that can tell you where the memory leak is, but trust me, 'Performance Profiling' is a great starting point; it definitely gets you started in the right direction. Let's have a look at how.

Performance Wizard - Start the Performance Wizard and select Instrumentation; this lets you measure function call counts and timings. Before running the performance session, right click the performance session settings and choose Properties from the context menu to bring up the Performance Session Properties page and, as shown in the screen shot below, check the check boxes in the group '.NET memory profiling collection', namely 'Collect .NET object allocation information' and 'Also collect the .NET object lifetime information'.

Now if you fire off the profiling session on your pages you will notice that the results allow you to view 'Object Lifetime', which shows you the number of objects that made it to Gen-0, Gen-1, Gen-2, the large object heap, etc. Another great feature of the profiler is that if your application has more than 5% of cases where objects die right after making it to Gen-2 storage, a threshold alert is generated to warn you. Since you also have the option to view the most expensive methods, and by capturing the IntelliTrace data, you can drill in and narrow down to the line of code that is the root cause of the problem. So now we have seen how crucial memory management is and how easy Visual Studio Ultimate 2010 makes it for us to identify and reproduce the problem with the best of breed tools in the product.

Caching
One of the main ways to improve performance is caching, which basically means you tell the web server that instead of going to the database for each request you keep the data on the web server, and when the user asks for it you serve it from the web server itself. BUT that can have consequences!
Let's look at some code; trust me, caching code is not very intuitive. I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine: the first time I get the data from the database and the second time the data is served from the cache, a significant performance improvement, EXCEPT when two users try to do the same operation and run into each other. But it is easy to handle this by adding a lock, as you can see in the sketch below. So, as long as a user comes in and finds that the cache is empty, the user takes the lock and starts to build the cache; no more concurrency issues. But let's say you are processing 10 requests per second: by the time I have locked the operation to get the results from the database, 9 other users came in and found that the cache key is null, so after I have come out and populated the cache they will still go in to get the results again. The application will still be faster because the next set of 10 users, and so on, would continue to get data from the cache. BUT if we added another null check after locking to build the cache and before the actual call to the db, then the 9 users who follow me would not make the extra trip to the database at all, and that would really increase the performance. But didn't I say that the code won't be very intuitive? Maybe you should leave a comment; you don't want another developer to come in and wonder why you are checking the cache key for null twice! The downside of caching is that you are storing the data outside of the database, and the data could be wrong because updates applied to the database would make the data cached at the web server out of sync. So, how do you invalidate the cache? Well, if you only had one way of updating the data, let's say only one entry point to the data update, you could write some logic to say that every time new data is entered, set the cache object to null. But this approach will not work as soon as you have several ways of feeding data to the system or your system is scaled out across a farm of web servers. The perfect solution to this is micro caching, which means you cache the query for a set time duration and invalidate the cache after that set duration. The advantage is that every time the user queries for that data within the time span for which you have cached the results, there are no calls made to the database and the data is served right from the server, which makes the response immensely quick. Now, figuring out the appropriate time span for which you micro cache the query results really depends on the application. Let's say your website gets 10 requests per second; if you retain the cached results for even 1 minute you will have immense performance gains, reducing hits to the database for searching by around 90%. Ever wondered why, when you go to e-bookers.com or xpedia.com or yatra.com to book a flight, you click on the book button because the fare seems too exciting, and you get an error message telling you that the fare is not valid any more? Yes, exactly => that is a cache failure! These travel sites or price compare engines are not going to hit the database every time you hit the compare button; instead the results are served from the cache, because the query results are micro cached. It's a perfect trade-off: by micro caching the results the site gains 100% performance benefits but every once in a while annoys a customer because the fare has expired.
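The snippet the post refers to was a screenshot in the original article; here is a minimal sketch of the pattern being described, with the lock, the second null check and a one-minute micro cache in one place. The SearchResult type, the cache key format and the GetSearchResultsFromDb helper are hypothetical placeholders, not the author's actual code.

using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

public class SearchResult { /* hypothetical result type */ }

public class SearchService
{
    private static readonly object CacheLock = new object();

    public List<SearchResult> GetSearchResults(string criteria)
    {
        string cacheKey = "search:" + criteria;
        Cache cache = HttpContext.Current.Cache;
        var results = cache[cacheKey] as List<SearchResult>;

        if (results == null)
        {
            lock (CacheLock)
            {
                // Second null check: another request may have filled the cache while
                // this one was waiting for the lock, saving a redundant trip to the db.
                results = cache[cacheKey] as List<SearchResult>;
                if (results == null)
                {
                    results = GetSearchResultsFromDb(criteria);

                    // Micro caching: keep the results for one minute only, so the
                    // cache invalidates itself after a short, predictable window.
                    cache.Insert(cacheKey, results, null,
                        DateTime.Now.AddMinutes(1), Cache.NoSlidingExpiration);
                }
            }
        }

        return results;
    }

    private List<SearchResult> GetSearchResultsFromDb(string criteria)
    {
        // Placeholder for the real database query.
        return new List<SearchResult>();
    }
}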
But the trade-off works in the favour of these sites, as they are still able to process 30+ page requests per second, which means they cater to the site traffic while maybe losing 1 customer every once in a while to a competitor who is also using a similar caching technique; and what are the odds that the user will not come back to their site sooner or later?

Recap

Resources
Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug Web Performance Tests. Some community test extensions and plug-ins available on CodePlex might also be of interest to you.

The Road Ahead
Thank you for taking the time out and reading this blog post; you may also want to read Part I and Part II if you haven't so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions/Feedback/Suggestions, etc., please leave a comment. Next up is 'Load Testing in the Cloud', where I'll be exploring the possibilities of running Test Controllers/Agents in the Cloud. See you on the other side! Thank You!

Share this post : CodeProject

    Read the article

  • CodePlex Daily Summary for Thursday, June 30, 2011

    CodePlex Daily Summary for Thursday, June 30, 2011Popular ReleasesASP.NET Comet Ajax Library (Reverse Ajax - Server Push): Reverse Ajax Samples v1.53: 16 Comprehensive ASP.NET Ajax / Reverse Ajax / WCF / MVC / Mono samplesReactive Extensions - Extensions (Rxx): Rxx 1.1: What's NewRelated Work Items Please read the latest release notes for details about what's new. About LabsAll "Labs" downloads include the Rxx.dll assembly, so only a single download is required. To start RxxLabs.exe, right-mouse click and select Run as Administrator; otherwise, do not run the Reactive WebClient lab because it will crash the program. RxxLabs.exe requires administrator privileges for the Reactive WebClient lab to register a local HTTP port. To launch the Silverlight labs...CommonLibrary.NET: CommonLibrary.NET - 0.9.7 Final: A collection of very reusable code and components in C# 4.0 ranging from ActiveRecord, Csv, Command Line Parsing, Configuration, Holiday Calendars, Logging, Authentication, and much more. Samples in <root>\src\Lib\CommonLibrary.NET\Samples CommonLibrary.NET 0.9.7Documentation 6738 6503 New 6535 Enhancements 6759 6748 6583 6737datajs - JavaScript Library for data-centric web applications: datajs version 1.0.0: datajs is a cross-browser and UI agnostic JavaScript library that enables data-centric web applications with the following features: OData client that enables CRUD operations including batching and metadata support using both ATOM and JSON payloads. Single store abstraction that provides a common API on top of HTML5 local storage technologies. Data cache component that allows reading data ranges from a collection and storing them locally to reduce the number of network requests. Changes...Coding4Fun Tools: Coding4Fun.Phone.Toolkit v1.4.4: Fix for http://coding4fun.codeplex.com/workitem/6869 was incomplete. Back button wouldn't return app bar. Corrected now. High impact bugSiteMap Editor for Microsoft Dynamics CRM 2011: SiteMap Editor (1.0.528.279): Added keyboard shortcuts: - Cut (CTRL+X) - Copy (CTRL+C) - Paste (CTRL+V) - Delete (CTRL+D) - Move up (CTRL+UP ARROW) - Move down (CTRL+DOWN ARROW) Added ability to save/load SiteMap from/to a Xml file on disk Bug fix: - Connect to a server through the status bar was throwing error "Object Reference not set to an instance of an object" - Rename TreeNode.Name after changing TreeNode.TextMicrosoft - Domain Oriented N-Layered .NET 4.0 App Sample: V2.01 ALPHA N-Layered SampleApp .NET 4.0 and EF4.1: V2.0.01 - ALPHARequired Software (Microsoft Base Software needed for Development environment) Visual Studio 2010 RTM & .NET 4.0 RTM (Final Versions) Expression Blend 4 SQL Server 2008 R2 Express/Standard/Enterprise Unity Application Block 2.0 - Published May 5th 2010 http://www.microsoft.com/downloads/en/details.aspx?FamilyID=2D24F179-E0A6-49D7-89C4-5B67D939F91B&displaylang=en http://unity.codeplex.com/releases/view/31277 PEX & MOLES 0.94.51023.0, 29/Oct/2010 - Visual Studio 2010 Power ...Mosaic Project: Mosaic Alpha build 261: - Fixed crash when pinning applications in x64 OS - Added Hub to video widget. It shows videos from Video library (only .wmv and .avi). Can work slow if there are too much files. 
- Fixed some issues with scrolling - Fixed bug with html widgets - Fixed bug in Gmail widget - Added html today widget missed in previous release - Now Mosaic saves running widgets if you restarting from optionsEnhSim: EnhSim 2.4.9 BETA: 2.4.9 BETAThis release supports WoW patch 4.2 at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Added in some of th....NET Reflector Add-Ins: Reflector V7 Add-Ins: All the add-ins compiled for Reflector V7TerrariViewer: TerrariViewer v4.1 [4.0 Bug Fixes]: Version 4.1 ChangelogChanged how users will Open Player files (This change makes it much easier) This allowed me to remove the "Current player file" labels that were present Changed file control icons Added submit bug button Various Bug Fixes Fixed crashes related to clicking on buffs before a character is loaded Fixed crashes related to selecting "No Buff" when choosing a new buff Fixed crashes related to clicking on a "Max" button on the buff tab before a character is loaded Cor...AcDown????? - Anime&Comic Downloader: AcDown????? v3.0 Beta8: ??AcDown???????????????,?????????????????????。????????????????????,??Acfun、Bilibili、???、???、?????,???????????、???????。 AcDown???????????????????????????,???,???????????????????。 AcDown???????C#??,?????"Acfun?????"。 ????32??64? Windows XP/Vista/7 ????????????? ??:????????Windows XP???,?????????.NET Framework 2.0???(x86)?.NET Framework 2.0???(x64),?????"?????????"??? ??????????????,??????????: ??"AcDown?????"????????? ??v3.0 Beta8 ?? ??????????????? ???????????????(??????????) ???????...BlogEngine.NET: BlogEngine.NET 2.5: Get DotNetBlogEngine for 3 Months Free! Click Here for More Info 3 Months FREE – BlogEngine.NET Hosting – Click Here! If you want to set up and start using BlogEngine.NET right away, you should download the Web project. If you want to extend or modify BlogEngine.NET, you should download the source code. If you are upgrading from a previous version of BlogEngine.NET, please take a look at the Upgrading to BlogEngine.NET 2.5 instructions. To get started, be sure to check out our installatio...PHP Manager for IIS: PHP Manager 1.2 for IIS 7: This release contains all the functionality available in 62183 plus the following additions: Command Line Support via PowerShell - now it is possible to manage and script PHP installations on IIS by using Windows PowerShell. More information is available at Managing PHP installations with PHP Manager command line. Detection and alert when using local PHP handler - if a web site or a directory has a local copy of PHP handler mapping then the configuration changes made on upper configuration ...MiniTwitter: 1.71: MiniTwitter 1.71 ???? ?? OAuth ???????????? ????????、??????????????????? 
???????????????????????SizeOnDisk: 1.0.10.0: Fix: issue 327: size format error when save settings Fix: some UI bindings trouble (sorting, refresh) Fix: user settings file deletion when corrupted Feature: TreeView virtualization (better speed with many folders) Feature: New file type DataGrid column Feature: In KByte view, show size of file < 1024B and > 0 with 3 decimal Feature: New language: Italian Task: Cleanup for speedRawr: Rawr 4.2.0: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr AddonWe now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including bag and bank items) like Char...N2 CMS: 2.2: * Web platform installer support available ** Nuget support available What's newDinamico Templates (beta) - an MVC3 & Razor based template pack using the template-first! development paradigm Boilerplate CSS & HTML5 Advanced theming with css comipilation (concrete, dark, roadwork, terracotta) Template-first! development style Content, news, listing, slider, image sizes, search, sitemap, globalization, youtube, google map Display Tokens - replaces text tokens with rendered content (usag...KinectNUI: Jun 25 Alpha Release: Initial public version. No installer needed, just run the EXE.Terraria World Viewer: Version 1.5: Update June 24th Made compatible with the new tiles found in Terraria 1.0.5New Projects{Adjunct} functionality for the .NET framework: A project to provide Ingots for the .NET framework.3Webee.net: 3Webee.net is the First Navigator dedicated to Web.3.0 by P2P. Developped in C# for DotNet-3.5 or Mono.net, compatible with Linux ready. Website.fr : http://3webee.net/ Download Win32 : http://3webee.net/Download/3Webee.net.beta.0.0.Win32.exe AB Donor Choose Planner: The goal of this doantion planner is to allow you to coordinate the completion of one or more Donors Choose projects. The tool gives you a portfolio view of the donations you would like to invest in by targeting your investments in a location/regional focused manner.appperu1: asdasdsaddas: ColinTestingasdfFindClone: Find clone files, find duplicate files, remove duplicate filesfirstcpapp: This is my appForecast Parser: The NOAA hosts forecast data accessible over the web. These libraries download and parse seven day hourly forecast data and encapsulates the data in an easy to use class.Future Apple Osx: Future Apple Osx, Is a free operating system to use. It was built from the ground up with the help of Cosmos. It is free to use and download. So please check it out today.Glimpse: Glimpse is a web debugger and diagnostics tool for ASP.NET and ASP.NET MVC. You can find out more at getGlimpse.comiAdm: ?? iToday ??? wince 6 R2 ?????ImagineCup Worldwide Finals Tracker: An open-source Windows Phone application that is used to track the events going on at the ImagineCup Worldwide Finals.John Owl: John OwlLAM - Local Area Messaging: A VB.NET local area chat application. Finds the lan clients and communicates through the lan with old Ms-Winsock Interop.MessyBrain: Organize your tasks in an efficient way. Assign tasks to members of your team. Create workflows for your team. Written in C# and ASP.NET MVC. Why did I start this project? Making mistakes is part of the learning process. 
They can be painful especially when they are made at work; there’s a cost attached to it. Why not start a project on my own? Mistakes will be less painful (only my ego will be damaged), I learn something new and there isn’t a cost attached to it. I can share the code ...Navigation Light Toolkit: Navigation Light add support for View Navigation in WPF and SilverlightRight Click Calculator: a mini calculator with numpad that opens in a dialogbox. it can combined with a textbox. bir textbox üzerinde sag tus ile açabileceginiz ufak bir hesap makinesi örnegi.Shaaps & Ladders: It's a modern software implementation of the popular classic board game Snakes & Ladders. The Bengali translation of "Snakes" pronounces "Shaaps" and thus the name of the game is such. The game is being developed on top of the Shaaps & Ladders GDK. Source for both are released.SharePoint PowerShell Scripts: Usefull PowerShell scripts written for SharePoint that helps organizations with governance.Smith Web Tools: Smith Web Tools are some useful controls to help to build web application. They are written in pure JavaScript and CSS without introducing any other JavaScript framework. At present, the Smith Calendar, Smith Editor, and Smith Dialog are available.SSAS Query Log Decoder & Analyzer: The Crisp description for the project will be "CUBE FOR CUBE". This is an end-to-end BI solution for analyzing the Query log created by SSAS Server. The Query Log data is loaded into a dimensional model and a cube is built on top of it for analysis of cube usage.takcandmansys: takcandmansysTestHG1: TestHG1TESTProjHG: TESTProjHGTestTFS1: TestTFS1TESTTFSAAA: TESTTFSAAATower: tower showTransit Feed Generator: Library for help the integration with Google Transit. This library generates the zipped file with data for Google Transit Feed, in the GTFS formatVRE Collaborator Search Kit for SharePoint 2010: VRE Collaborator Search Kit for SharePoint 2010VRE Content Archiving Kit for SharePoint 2010: VRE Content Archiving Kit for SharePoint 2010VRE Document Review Workflow Kit for SharePoint 2010: VRE Document Review Workflow Kit for SharePoint 2010 VRE Literature Review Kit for SharePoint 2010: VRE Literature Review Kit for SharePoint 2010 VRE Researcher and Project Templates for SharePoint 2010: VRE Researcher and Project Templates for SharePoint 2010 VRE RSS Feeds Kit for SharePoint 2010: VRE RSS Feeds Kit for SharePoint 2010 VRE User Administration (FBA) Kit for SharePoint 2010: VRE User Administration (FBA) Kit for SharePoint 2010 WebPart Collapser: WebPart Collapser is a lightweight, customizable jQuery plugin for SharePoint 2007 that allows visitors to expand/collapse WebParts. Through the use of cookies, the collapsed state of any webparts will be saved and collapsed each time a user visits a page. WPF Hex Editor: hex editor which is created with WPF with a xaml designed UI.Xoorscript: A proprietary scripting language created by Jared Thomson for the purpose of script defining an easy to use make system. The plans for this project are minimal for now, but if things go well I may expand it. This is low priority for me.

    Read the article

  • How to Visualize your Audit Data with BI Publisher?

    - by kanichiro.nishida
Do you know how many reports on your BI Publisher server were accessed yesterday? Or how many users accessed the reports yesterday, or what the average number of users accessing the reports is during the week vs. the weekend, or morning vs. afternoon? With BI Publisher 11G you can now audit your users' report access and understand the state of the reporting environment at the server, user, or report level. In the previous post I talked about BI Publisher's auditing functionality and how to enable it so that BI Publisher can start collecting such data. (How to Audit and Monitor BI Publisher Reports Access?) Now, how can you visualize such auditing data to have a better understanding and gain more insights?

With Fusion Middleware Audit Framework you have an option to store the auditing data in a database instead of a log file, which is the default option. Once you enable the database storage option, you have your auditing data (or user report access data) in your database tables, and from there it's a no-brainer: you can start to visualize the data, create reports, analyze, and share with BI Publisher. So first, let's take a look at how to enable the database storage option for the auditing data.

How to Feed the Auditing Data into the Database

First you need to create a database schema for Fusion Middleware Audit Framework with RCU (Repository Creation Utility). If you have already installed BI Publisher 11G you should be familiar with this RCU; it creates any database schema necessary to run Fusion Middleware products, including the BI ones. And you can use the same RCU that you used for your BI or BI Publisher installation to create this audit schema.

Create Audit Schema with RCU
Here are the steps:
Go to $RCU_HOME/bin and execute the 'rcu' command.
Choose Create at the starting screen and click Next.
Enter your database details and click Next.
Choose the option to create a new prefix, for example 'BIP', 'KAN', etc.
Select 'Audit Services' from the list of schemas.
Click Next and accept the tablespace creation.
Click Finish to start the process.

After this, there should be the following three audit-related schemas created in your database:
<prefix>_IAU (e.g. KAN_IAU)
<prefix>_IAU_APPEND (e.g. KAN_IAU_APPEND)
<prefix>_IAU_VIEWER (e.g. KAN_IAU_VIEWER)

Setup Datasource at WebLogic
After you create a database schema for your auditing data, you need to create a JDBC connection on your WebLogic Server so the Audit Framework can access the database schema that was created with RCU in the previous step.

Connect to the Oracle WebLogic Server administration console: http://hostname:port/console (e.g. http://report.oracle.com:7001/console)
Under Services, click the Data Sources link.
Click 'Lock & Edit' so that you can make changes.
Click New –> 'Generic Datasource' to create a new data source.
Enter the following details for the new data source:
Name: Enter a name such as Audit Data Source-0.
JNDI Name: jdbc/AuditDB
Database Type: Oracle
Click Next and select 'Oracle's Driver (Thin XA) Versions: 9.0.1 or later' as the Database Driver (if you're using an Oracle database), and click Next. The Connection Properties page appears. Enter the following information:
Database Name: Enter the name of the database (SID) to which you will connect.
Host Name: Enter the hostname of the database.
Port: Enter the database port.
Database User Name: This is the name of the audit schema that you created in RCU. The suffix is always IAU for the audit schema.
For example, if you gave the prefix as 'KAN', then the schema name would be 'KAN_IAU'.
Password: This is the password for the audit schema that you created in RCU.
Click Next. Accept the defaults, and click Test Configuration to verify the connection. Click Next.
Check the listed servers where you want to make this JDBC connection available.
Click 'Finish'!

After that, make sure you click 'Activate Changes' at the top left to put the new JDBC connection into effect.

Register your Audit Data Storing Database to your Domain
Finally, you can register the JNDI/JDBC datasource as your auditing data storage with Fusion Middleware Control (EM). Here are the steps:
1. Login to Fusion Middleware Control.
2. Navigate to WebLogic Domain, right click on 'bifoundation…..', select Security, then Audit Store.
3. Click the searchlight icon next to the Datasource JNDI Name field.
4. Select the audit JNDI/JDBC datasource you created in the previous step in the pop-up window and click OK.
5. Click Apply to continue.
6. Restart all of the WebLogic Servers in the domain.

After this, BI Publisher should start feeding all the auditing data into the database table called 'IAU_BASE'. Try logging in to BI Publisher and opening a couple of reports; you should see the activity audited in the 'IAU_BASE' table. If it's not working, you might want to check the log file, which is located at $BI_HOME/user_projects/domains/bifoundation_domain/servers/AdminServer/logs/AdminServer-diagnostic.log, to see if there is any error. Once you have the data in the database table, it's time to visualize it with BI Publisher reports!

Create a First BI Publisher Auditing Report

Register the Auditing Datasource as a JNDI Datasource
The first thing you need to do is register the audit datasource (JNDI/JDBC connection) you created in the previous step as a JNDI data source in BI Publisher. It is a JDBC connection registered as JNDI, which means you don't need to create a new JDBC connection by typing the connection URL, username/password, etc. You can just register it using the JNDI name (e.g. jdbc/AuditDB).

Login to BI Publisher as Administrator (e.g. weblogic).
Go to the Administration page.
Click 'JNDI Connection' under Data Sources and click 'New'.
Type the Data Source Name and JNDI Name. The JNDI Name is the one you created in the WebLogic Console as the auditing datasource (e.g. jdbc/AuditDB).
Click 'Test Connection' to make sure the datasource connection works.
Provide appropriate roles so that the report developers or viewers can share this data source to view reports.
Click 'Apply' to save.

Create Data Model
Select Data Model from the tool bar menu 'New'.
Set 'Default Data Source' to the audit JNDI data source you created in the previous step.
Select 'SQL Query' for your data set.
Use Query Builder to build a query or just type a SQL query. Either way, the table you want to report against is 'IAU_BASE'. This IAU_BASE table also contains the auditing data for the other products running on the WebLogic Server, such as JPS, OID, etc. So, if you care only about BI Publisher, you want to filter using the 'IAU_COMPONENTTYPE' column, which contains the product name (e.g. 'xmlpserver' for BI Publisher). Here is my sample SQL query.
select
    "IAU_BASE"."IAU_COMPONENTTYPE" as "IAU_COMPONENTTYPE",
    "IAU_BASE"."IAU_EVENTTYPE" as "IAU_EVENTTYPE",
    "IAU_BASE"."IAU_EVENTCATEGORY" as "IAU_EVENTCATEGORY",
    "IAU_BASE"."IAU_TSTZORIGINATING" as "IAU_TSTZORIGINATING",
    to_char("IAU_TSTZORIGINATING", 'YYYY-MM-DD') IAU_DATE,
    to_char("IAU_TSTZORIGINATING", 'DAY') as IAU_DAY,
    to_char("IAU_TSTZORIGINATING", 'HH24') as IAU_HH24,
    to_char("IAU_TSTZORIGINATING", 'WW') as IAU_WEEK_OF_YEAR,
    "IAU_BASE"."IAU_INITIATOR" as "IAU_INITIATOR",
    "IAU_BASE"."IAU_RESOURCE" as "IAU_RESOURCE",
    "IAU_BASE"."IAU_TARGET" as "IAU_TARGET",
    "IAU_BASE"."IAU_MESSAGETEXT" as "IAU_MESSAGETEXT",
    "IAU_BASE"."IAU_FAILURECODE" as "IAU_FAILURECODE",
    "IAU_BASE"."IAU_REMOTEIP" as "IAU_REMOTEIP"
from
    "KAN3_IAU"."IAU_BASE" "IAU_BASE"
where "IAU_BASE"."IAU_COMPONENTTYPE" = 'xmlpserver'

Once you have saved a sample XML for this data model, you can create a report with it.

Create Report
Now you can use one of BI Publisher's layout options to design the report layout and visualize the auditing data. I'm a big fan of the Online Layout Editor: it's just so easy and simple to create reports, and on top of that, all the reports created with the Online Layout Editor have the Interactive View with automatic data linking and filtering, without any setting or coding. If you haven't checked out the Interactive View or the Online Layout Editor you might want to check these previous blog posts. (Interactive Reporting with BI Publisher 11G, Interactive Master Detail Report Just A Few Clicks Away!) But of course, you can use other layout design options such as an RTF template. Here are some sample screenshots of my report design with the Online Layout Editor.

Visualize and Gain More Insights about your Customers (Users)!
Now you can visualize your auditing data to get a better understanding of, and gain more insights about, the reporting environment you manage. It has been helping me personally to answer questions like these:

How many reports were accessed or opened yesterday, today, last week?
Who is accessing which report at what time?
What are the time windows when most of the report access happens?
What are the most viewed reports?
Who are the active users?
What is the trend of report access or user access over the last month, the last 6 months, the last 12 months, etc.?

I was talking with one of the best concierges in the world at a hotel the other day, and he was telling me that the best concierges know their customers inside out, so they can provide a very personal service that is customized to each customer's specific needs. Well, this is true when it comes to administering and managing your reporting environment too, right? The best way to serve your customers (report users, including both viewers and developers) is to understand how they use it, what they use, and when they use it. Auditing is not just about compliance; it's a way to improve your customer service. The BI Publisher 11G auditing feature enables just that, helping you understand your customers better. Happy customer service, be the best reporting concierge!

p.s. Please share with us what other information would be helpful for you for auditing! As always, any feedback is a great source of value and inspiration for us!

    Read the article

  • How to use wkhtmltopdf.exe in ASP.net

    - by David Murdoch
After 10 hours and trying 4 other HTML to PDF tools I'm about ready to explode. wkhtmltopdf sounds like an excellent solution... the problem is that I can't execute a process with enough permissions from ASP.NET, so... Process.Start("wkhtmltopdf.exe", "http://www.google.com google.pdf"); starts but doesn't do anything. Is there an easy way to either: a) allow ASP.NET to start processes (that can actually do something), or b) compile/wrap/whatever wkhtmltopdf.exe into something I can use from C# like this: WkHtmlToPdf.Save("http://www.google.com", "google.pdf");
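Not a definitive answer, but a sketch of the usual way this is wired up from C#: run the process via ProcessStartInfo, redirect its output and wait for it to exit, so the PDF exists before the request continues. The paths below are hypothetical placeholders, and the commonly reported catch is that the application pool identity needs execute rights on the exe and write access to the output folder.

using System.Diagnostics;

var psi = new ProcessStartInfo
{
    FileName = @"C:\tools\wkhtmltopdf\wkhtmltopdf.exe",           // hypothetical install path
    Arguments = @"http://www.google.com C:\temp\google.pdf",      // output folder must be writable by the app pool identity
    UseShellExecute = false,                                      // required for output redirection
    RedirectStandardOutput = true,
    RedirectStandardError = true,
    CreateNoWindow = true
};

using (var process = Process.Start(psi))
{
    string stderr = process.StandardError.ReadToEnd();            // wkhtmltopdf writes its progress here
    process.WaitForExit();
    // Inspect process.ExitCode and stderr if the PDF was not produced.
}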

    Read the article

  • C# HttpListener Prefix issue with anything other than localhost

    - by jchristner
Hello, I'm trying to use C# and HttpListener with a prefix of anything other than localhost and it fails (i.e. if I give it "server1", http://localhost:1234 works, but http://server1:1234 fails). The code is...

HttpListener listener = new HttpListener();
String prefix = "http://server1:1234/";
listener.Prefixes.Add(prefix);
listener.Start();

The failure occurs on listener.Start() with an exception of "Access is denied.". Any ideas? Thanks!
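For what it's worth, this reads like the standard HTTP.SYS URL reservation issue rather than a code bug: on Vista/Server 2008 and later, a non-localhost prefix has to be reserved for the account running the process (for example with netsh http add urlacl url=http://+:1234/ user=YOURDOMAIN\youruser), or the process has to run elevated. A minimal sketch assuming such a reservation exists:

using System;
using System.Net;
using System.Text;

class ListenerDemo
{
    static void Main()
    {
        var listener = new HttpListener();

        // The "+" wildcard binds to every host name on the port, so requests
        // addressed to server1:1234 are accepted as well as localhost:1234.
        listener.Prefixes.Add("http://+:1234/");
        listener.Start();

        HttpListenerContext context = listener.GetContext();   // blocks until a request arrives
        byte[] body = Encoding.UTF8.GetBytes("Hello from HttpListener");
        context.Response.OutputStream.Write(body, 0, body.Length);
        context.Response.Close();

        listener.Stop();
    }
}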

    Read the article

< Previous Page | 231 232 233 234 235 236 237 238 239 240 241 242  | Next Page >