Search Results

Search found 62870 results on 2515 pages for 'usage data'.


  • Limit CPU usage of a process

    - by jb
    I have a service running which periodically checks a folder for a file and then processes it. (Reads it, extracts the data, stores it in SQL.) So I ran it on a test box and it took a little longer than expected. The file had 1.6 million rows, and it was still running after 6 hours (then I went home). The problem is the box it is running on is now absolutely crippled - remote desktop was timing out so I can't even get on it to stop the process, or attach a debugger to see how far through it is, etc. It's solidly using 90%+ CPU, and all other running services and apps are suffering. The code is (from memory, may not compile):

        List<ItemDTO> items = new List<ItemDTO>();
        using (StreamReader sr = fileInfo.OpenText())
        {
            while (!sr.EndOfStream)
            {
                string line = sr.ReadLine();
                try
                {
                    string s = line.Substring(0, 8);
                    double y = Double.Parse(line.Substring(8, 7));
                    // If the item isn't already in the collection, add it.
                    if (items.Find(delegate(ItemDTO i) { return (i.Item == s); }) == null)
                        items.Add(new ItemDTO(s, y));
                }
                catch { /* Crash */ }
            }
            return items;
        }

    So I am working on improving the code (any tips appreciated). But it still could be a slow affair, which is fine; I've no problem with it taking a long time as long as it's not killing my server. So what I want from you fine people is: 1) Is my code hideously un-optimized? 2) Can I limit the amount of CPU my code block may use? Cheers all
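    A likely culprit for both the runtime and the CPU burn: items.Find does a linear scan of the list for every row, which makes the whole job O(n²) over 1.6 million rows. Below is a sketch of both fixes, assuming the asker's ItemDTO class and the 8+7 character fixed-width format from the question; a Dictionary turns the duplicate check into a single hash lookup, and dropping the process priority lets every other service on the box run first:

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;
        using System.IO;

        public class ItemImporter
        {
            public List<ItemDTO> ReadItems(FileInfo fileInfo)
            {
                // Let everything else on the box schedule ahead of this work.
                Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.BelowNormal;

                Dictionary<string, ItemDTO> items = new Dictionary<string, ItemDTO>();
                using (StreamReader sr = fileInfo.OpenText())
                {
                    string line;
                    while ((line = sr.ReadLine()) != null)
                    {
                        if (line.Length < 15)
                            continue; // skip malformed rows instead of swallowing exceptions

                        string key = line.Substring(0, 8);
                        double y;
                        if (!items.ContainsKey(key) && Double.TryParse(line.Substring(8, 7), out y))
                            items.Add(key, new ItemDTO(key, y)); // one hash lookup, not a list scan
                    }
                }
                return new List<ItemDTO>(items.Values);
            }
        }

    Note that BelowNormal does not cap CPU at a fixed percentage - the process still soaks up idle cycles - but it stops the job from starving remote desktop and the other services, which is usually what "don't kill my server" actually needs.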

    Read the article

  • Is excessive DataTable usage bad?

    - by Justin R.
    I was recently asked to assist another team in building an ASP.NET website. They already have a significant amount of code written - I was specifically asked to build a few individual pages for the site. While exploring the code for the rest of the site, the number of DataTables being constructed jumped out at me. Being relatively new to the field, I've never worked on an application that utilizes a database as much as this site does, so I'm not sure how common this is. It seems that whenever data is queried from our database, the results are stored in a DataTable. This DataTable is then usually passed around by itself, or it's passed to a constructor. Classes that are initialized with a DataTable always assign the DataTable to a private/protected field, however only a few of these classes implement IDisposable. In fact, in the thousands of lines of code that I've browsed so far, I have yet to see the Dispose method called on a DataTable. If anything, this doesn't seem to be good OOP. Is this something that I should worry about? Or am I just paying more attention to detail than I should? Assuming you're more experienced developers than I am, how would you feel or react if someone who was just assigned to help you with your site approached you about this "problem"?
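    For what it's worth, the usual cure when DataTables leak through every layer like this is to confine them to the data access layer and map each row into a small typed object at the boundary, disposing the table immediately. A minimal sketch - the Customer class and column names are invented for illustration:

        using System.Collections.Generic;
        using System.Data;

        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public static class CustomerMapper
        {
            // The DataTable never leaves this method; callers only see POCOs.
            public static List<Customer> ToCustomers(DataTable table)
            {
                using (table)
                {
                    List<Customer> result = new List<Customer>();
                    foreach (DataRow row in table.Rows)
                    {
                        result.Add(new Customer
                        {
                            Id = (int)row["CustomerId"],
                            Name = (string)row["Name"]
                        });
                    }
                    return result;
                }
            }
        }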

    Read the article

  • The correct usage of nested #pragma omp for directives

    - by GoldenLee
    The following code ran like a charm before OpenMP parallelization was applied. With the directive added, the code ends up in an endless loop! I'm sure that results from my incorrect use of the OpenMP directives. Would you please show me the correct way? Thank you very much.

        #pragma omp parallel for
        for (int nY = nYTop; nY <= nYBottom; nY++)
        {
            for (int nX = nXLeft; nX <= nXRight; nX++)
            {
                // Use look-up table for performance
                dLon = theApp.m_LonLatLUT.LonGrid()[nY][nX] + m_FavoriteSVISSRParams.m_dNadirLon;
                dLat = theApp.m_LonLatLUT.LatGrid()[nY][nX];
                // If you don't want to use the longitude/latitude look-up table, uncomment the following line
                //NOMGeoLocate.XYToGEO(dLon, dLat, nX, nY);
                if (dLon > 180 || dLat > 180)
                {
                    continue;
                }
                if (Navigation.GeoToXY(dX, dY, dLon, dLat, 0) > 0)
                {
                    continue;
                }
                // Skip void data scanline
                dY = dY - nScanlineOffset;
                // Compute coefficients as well as the four neighboring points' values
                nX1 = int(dX);
                nX2 = nX1 + 1;
                nY1 = int(dY);
                nY2 = nY1 + 1;
                dCx = dX - nX1;
                dCy = dY - nY1;
                dP1 = pIRChannelData->operator [](nY1)[nX1];
                dP2 = pIRChannelData->operator [](nY1)[nX2];
                dP3 = pIRChannelData->operator [](nY2)[nX1];
                dP4 = pIRChannelData->operator [](nY2)[nX2];
                // Bilinear interpolation
                usNomDataBlock[nY][nX] = (unsigned short)BilinearInterpolation(dCx, dCy, dP1, dP2, dP3, dP4);
            }
        }
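    The endless loop is almost certainly a data race: dLon, dLat, dX, dY, and nX1 through dP4 are declared outside the loop, so OpenMP treats them as shared and the threads overwrite each other's scratch values mid-calculation. Declaring them private - or simply declaring them inside the loop body, which makes them private automatically - is the standard fix. A sketch, assuming those are the only variables written inside the loop:

        // Give every thread its own copy of the scratch variables.
        #pragma omp parallel for private(dLon, dLat, dX, dY,            \
                                         nX1, nX2, nY1, nY2, dCx, dCy, \
                                         dP1, dP2, dP3, dP4)
        for (int nY = nYTop; nY <= nYBottom; nY++)
        {
            for (int nX = nXLeft; nX <= nXRight; nX++)
            {
                // ... same body as above: each thread now updates its own
                // dLon/dLat/dX/dY/... without racing the other threads.
            }
        }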

    Read the article

  • jQuery fails to retrieve accurate data from sibling field

    - by i need help
    I wonder what's wrong:

        <table id="tblDomainVersion">
            <tr>
                <td>Version</td>
                <td>No of sites</td>
            </tr>
            <tr>
                <td class="clsversion">1.25</td>
                <td><a id="expanddomain">3 sites</a><span id="spanshowall"></span></td>
            </tr>
            <tr>
                <td class="clsversion">1.37</td>
                <td><a id="expanddomain">7 sites</a><span id="spanshowall"></span></td>
            </tr>
        </table>

        $('#expanddomain').click(function() {
            // the siblings result is incorrect:
            // selecting the first row works,
            // selecting the second row gives no response
            var versionforselected = $('#expanddomain').parent().siblings("td.clsversion").text();
            alert(versionforselected);
            $.ajax({
                url: "ajaxquery.php",
                type: "POST",
                data: 'version=' + versionforselected,
                timeout: 900000,
                success: function(output) {
                    output = jQuery.trim(output);
                    $('#spanshowall').html(output);
                },
            });
        });
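    The root cause is the duplicated id attributes: an id must be unique per document, so $('#expanddomain') binds only to the first anchor and always reads the first row's siblings. Switching the repeated ids to classes and resolving everything relative to $(this) fixes both rows - a sketch (it also drops the trailing comma after the success handler, which older IE rejects as a syntax error):

        // HTML: <a class="expanddomain">3 sites</a><span class="spanshowall"></span>
        $('.expanddomain').click(function() {
            var $cell = $(this).parent();                // the <td> this link lives in
            var versionforselected = $cell.siblings('td.clsversion').text();
            $.ajax({
                url: 'ajaxquery.php',
                type: 'POST',
                data: { version: versionforselected },   // let jQuery URL-encode it
                timeout: 900000,
                success: function(output) {
                    $cell.find('span.spanshowall').html(jQuery.trim(output));
                }                                        // no trailing comma
            });
        });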

    Read the article

  • Is there a more memory efficient way to search through a Core Data database?

    - by Kristian K
    I need to see if an object that I have obtained from a CSV file with a unique identifier exists in my Core Data database, and this is the code I deemed suitable for this task:

        NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
        NSEntityDescription *entity;
        entity = [NSEntityDescription entityForName:@"ICD9" inManagedObjectContext:passedContext];
        [fetchRequest setEntity:entity];
        NSPredicate *pred = [NSPredicate predicateWithFormat:@"uniqueID like %@", uniqueIdentifier];
        [fetchRequest setPredicate:pred];
        NSError *err;
        NSArray *icd9s = [passedContext executeFetchRequest:fetchRequest error:&err];
        [fetchRequest release];
        if ([icd9s count] > 0)
        {
            for (int i = 0; i < [icd9s count]; i++)
            {
                NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
                NSString *name = [[icd9s objectAtIndex:i] valueForKey:@"uniqueID"];
                if ([name caseInsensitiveCompare:uniqueIdentifier] == NSOrderedSame && name != nil)
                {
                    [pool release];
                    return [icd9s objectAtIndex:i];
                }
                [pool release];
            }
        }
        return nil;

    After more thorough testing it appears that this code is responsible for a huge amount of leaking in the app I'm writing (it crashes on a 3GS before making it 20 percent through the 1459 items). I feel like this isn't the most efficient way to do this; any suggestions for a more memory-efficient way? Thanks in advance!
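    Rather than fetching every matching object and comparing strings in a loop, the case-insensitive match can be pushed into the predicate itself and the fetch capped at one result, so Core Data never materializes more than a single object. A sketch, assuming uniqueID is a string attribute of the ICD9 entity:

        NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
        [fetchRequest setEntity:[NSEntityDescription entityForName:@"ICD9"
                                            inManagedObjectContext:passedContext]];
        // ==[c] is a case-insensitive equality match; LIKE forces pattern matching.
        [fetchRequest setPredicate:
            [NSPredicate predicateWithFormat:@"uniqueID ==[c] %@", uniqueIdentifier]];
        [fetchRequest setFetchLimit:1]; // we only ever need one row

        NSError *err = nil;
        NSArray *matches = [passedContext executeFetchRequest:fetchRequest error:&err];
        [fetchRequest release];
        return [matches count] > 0 ? [matches objectAtIndex:0] : nil;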

    Read the article

  • Silverlight 4 caching issue?

    - by DavidS
    I am currently experiencing a weird caching problem, it would seem. When I load my data initially, I return all the data within given dates and my graph looks as follows: Then I filter the data to return a subset of the original data for the same date range (not that it matters) and I get the following view of my data: However, I intermittently get the following when I refresh the same filtered view of the data: One can see that not all the data gets cached but only some of it, i.e. for 12 Dec 2010 and 5 Dec 2010 (not shown here). I've looked at my queries and the correct data is getting pulled out. It is only on the presentation layer, i.e. in MainPage.xaml.cs, that this erroneous data seems to exist. I've stepped through the code and the data is correct through all the layers except the presentation layer. Has anyone experienced this before? Is there some sort of caching going on in the background that is keeping that data around, even though I've got browser caching off? I am using the LoadOperation in the callback method within the Load method of the DomainContext, if that helps...
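    One thing worth ruling out: if the same DomainContext instance is reused across loads, entities from the previous query stay in its EntityContainer and are merged with the fresh results, which looks exactly like stale caching at the presentation layer. A sketch of the usual workaround, assuming a WCF RIA Services DomainContext named ctx (GetFilteredDataQuery is a placeholder for the real query method):

        // Drop everything the context has accumulated before reloading, so
        // leftover entities can't be merged into the new result set.
        ctx.EntityContainer.Clear();
        ctx.Load(ctx.GetFilteredDataQuery(fromDate, toDate),
                 LoadBehavior.RefreshCurrent, OnDataLoaded, null);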

    Read the article

  • How can I synchronize one set of data with another?

    - by RenderIn
    I have an old database and a new database. The old records were converted to the new database recently. All our old applications continue to point to the old database, but the new applications point to the new database. Currently the old database is the only one being updated, so throughout the day the new database becomes out of sync. It is acceptable for the new database to be out of sync for a day, so until all our applications are pointed to the new database I just need to write a nightly cron job that will bring it up to date. I do not want to purge the new database and run the complete conversion script each night, as that would reduce uptime and would create a mess in our auditing of that table. I'm thinking about selecting all the data from the old database, converting it to the new database structure in memory, and then checking for the existence of each record before inserting it in the new database. After that's done, I'd select everything from the new database and check if it exists in the old one, and if not delete it. Is this the simplest way to do this?
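    Broadly, yes - and the existence-check-then-insert and existence-check-then-delete passes can usually be collapsed into one set-based statement per table, which is far faster than comparing row by row in memory. An Oracle-flavored sketch with invented schema and column names (old_db, new_db, customer):

        -- Insert missing rows and refresh changed ones in one pass:
        MERGE INTO new_db.customer dst
        USING (SELECT id, name, updated_at FROM old_db.customer) src
           ON (dst.id = src.id)
        WHEN MATCHED THEN
            UPDATE SET dst.name = src.name, dst.updated_at = src.updated_at
        WHEN NOT MATCHED THEN
            INSERT (id, name, updated_at) VALUES (src.id, src.name, src.updated_at);

        -- Remove rows deleted from the old system since the last sync:
        DELETE FROM new_db.customer
         WHERE id NOT IN (SELECT id FROM old_db.customer);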

    Read the article

  • Common block usage in Fortran

    - by Crystal
    I'm new to Fortran and just doing some simple things for work. And as a new programmer in general, I'm not sure exactly how this works, so excuse me if my explanation or notation is not the best. At the top of the .F file there are common declarations. The person explaining it to me said to think of it like a struct in C, and that they are global. Also in that same .F file, they are declared with a type. So it's something like:

        COMMON SOMEVAR
        INTEGER*2 SOMEVAR

    And then when I actually see it being used in some other file, they declare local variables (e.g. SOMEVAR_LOCAL) and, depending on the condition, they set SOMEVAR_LOCAL = 1 or 0. Then there is another conditional later down the line that will say something like

        IF (SOMEVAR_LOCAL .EQ. 1) SOMEVAR(PARAM) = 1

    (Again I apologize if this is not proper Fortran, but I don't have access to the code right now.) So it seems to me that there is a "struct"-like variable called SOMEVAR that is of some length (2 bytes of data?), then there is a local variable that is used as a flag so that, later down the line, the global SOMEVAR can be set to that value. But because there is (PARAM), it's like an array for that particular instance? Thanks. Sorry for my bad explanation, but hopefully you will understand what I am asking.
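    That reading is correct: INTEGER*2 declares a 2-byte integer, COMMON places it in shared storage that every routine declaring the same block sees, and SOMEVAR(PARAM) is an ordinary array element reference. A minimal sketch of what the described code probably looks like (the array size and routine name are invented):

        C     SOMEVAR lives in blank COMMON, so any routine that declares the
        C     same COMMON sees (and can modify) the same storage.
              SUBROUTINE SETFLAG(PARAM)
              INTEGER PARAM
              INTEGER*2 SOMEVAR(100)
              COMMON SOMEVAR
              INTEGER*2 SOMEVAR_LOCAL
              SOMEVAR_LOCAL = 1
              IF (SOMEVAR_LOCAL .EQ. 1) SOMEVAR(PARAM) = 1
              RETURN
              END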

    Read the article

  • Is my fragment usage correct? It seems to be slow on Android

    - by Robertoq
    My app structure is that I have a menu with 5 menu points on the left side, and the content on the right side.

    MainActivity.xml:

        <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
            xmlns:tools="http://schemas.android.com/tools"
            android:layout_width="match_parent"
            android:layout_height="match_parent" >

            <fragment
                android:id="@+id/fragmentMenu"
                android:name="com.example.FragmentMenu"
                android:layout_width="@dimen/MenuWidth"
                android:layout_height="match_parent" />

            <LinearLayout
                android:id="@+id/content"
                android:layout_width="match_parent"
                android:layout_height="match_parent"
                android:layout_toRightOf="@+id/fragmentMenu"
                android:orientation="vertical"/>

        </RelativeLayout>

    MainActivity.java:

        public class FragmentActivityMain extends FragmentActivity {
            @Override
            protected void onCreate(final Bundle arg0) {
                super.onCreate(arg0);
                setContentView(R.layout.fragment_activity_main);
                FragmentManager fm = getSupportFragmentManager();
                FragmentMenu fragmentMenu = (FragmentMenu) fm.findFragmentById(R.id.fragmentMenu);
                fragmentMenu.init();
            }
        }

    And certainly I have a FragmentMenu class:

        public class FragmentMenu extends ListFragment {
            @Override
            public View onCreateView(final LayoutInflater inflater, final ViewGroup container,
                    final Bundle savedInstanceState) {
                View view = inflater.inflate(R.layout.fragment_menu, container, false);
                return view;
            }

            public void init() {
                FragmentManager fm = getFragmentManager();
                FragmentTransaction ft = fm.beginTransaction();
                FragmentCarListView lw = new FragmentCarListView();
                ft.add(R.id.content, lw);
                ft.commit();
            }
        }

    The FragmentCarListView is a simple list, for now with static test data - only five items in a list. My problem: it is slow. I tested the app on my phone (Galaxy S3) and I see a white screen for around 0.5 seconds when the app starts, and this is the log:

        10-29 11:43:44.093: D/dalvikvm(29710): GC_CONCURRENT freed 267K, 5% free 13903K/14535K, paused 10ms+2ms
        10-29 11:43:44.133: D/dalvikvm(29710): GC_FOR_ALLOC freed 215K, 6% free 13896K/14663K, paused 12ms
        10-29 11:43:44.233: D/dalvikvm(29710): GC_FOR_ALLOC freed 262K, 6% free 13901K/14663K, paused 12ms
        10-29 11:43:44.258: D/dalvikvm(29710): GC_FOR_ALLOC freed 212K, 6% free 13897K/14663K, paused 13ms
        10-29 11:43:44.278: D/dalvikvm(29710): GC_FOR_ALLOC freed 208K, 6% free 13897K/14663K, paused 12ms
        10-29 11:43:44.328: D/dalvikvm(29710): GC_FOR_ALLOC freed 131K, 4% free 14098K/14663K, paused 12ms
        10-29 11:43:44.398: D/dalvikvm(29710): GC_CONCURRENT freed 20K, 3% free 14559K/14919K, paused 1ms+4ms

    And when I tested on an Xperia Ray, the white screen appeared for a longer time. How can I optimize my fragments? Thx
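    Those repeated GC_FOR_ALLOC lines point at allocation churn while the first screen is being built. If the list adapter behind FragmentCarListView inflates a fresh row view on every getView() call, recycling rows with the ViewHolder pattern is the usual first fix - a sketch with invented layout and holder names (R.layout.car_row, CarRowHolder):

        // Recycle row views instead of inflating one per getView() call.
        @Override
        public View getView(int position, View convertView, ViewGroup parent) {
            CarRowHolder holder;
            if (convertView == null) {
                convertView = inflater.inflate(R.layout.car_row, parent, false);
                holder = new CarRowHolder();
                holder.title = (TextView) convertView.findViewById(R.id.title);
                convertView.setTag(holder);
            } else {
                holder = (CarRowHolder) convertView.getTag(); // reuse, no inflate
            }
            holder.title.setText(getItem(position).toString());
            return convertView;
        }

        private static class CarRowHolder {
            TextView title;
        }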

    Read the article

  • Need Google Map InfoWindow Hyperlink to Open Content in Overlay (Fusion Table Usage)

    - by McKev
    I have the following code established to render the map in my site. When the map is clicked, the info window pops up with a bunch of content including a hyperlink to open up a website with a form in it. I would like to utilize a function like fancybox to open up this link "form" in an overlay. I have read that fancybox doesn't support calling the function from within an iframe, and was wondering if there was a way to pass the link data to the DOM and trigger the fancybox (or another overlay option) in another way? Maybe a callback trick - any tips would be much appreciated! <style> #map-canvas { width:850px; height:600px; } </style> <script type="text/javascript" src="http://maps.google.com/maps/api/js?sensor=true"></script> <script src="http://gmaps-utility-gis.googlecode.com/svn/trunk/fusiontips/src/fusiontips.js" type="text/javascript"></script> <script type="text/javascript"> var map; var tableid = "1nDFsxuYxr54viD_fuH7fGm1QRZRdcxFKbSwwRjk"; var layer; var initialLocation; var browserSupportFlag = new Boolean(); var uscenter = new google.maps.LatLng(37.6970, -91.8096); function initialize() { map = new google.maps.Map(document.getElementById('map-canvas'), { zoom: 4, mapTypeId: google.maps.MapTypeId.ROADMAP }); layer = new google.maps.FusionTablesLayer({ query: { select: "'Geometry'", from: tableid }, map: map }); //http://gmaps-utility-gis.googlecode.com/svn/trunk/fusiontips/docs/reference.html layer.enableMapTips({ select: "'Contact Name','Contact Title','Contact Location','Contact Phone'", from: tableid, geometryColumn: 'Geometry', suppressMapTips: false, delay: 500, tolerance: 8 }); ; // Try W3C Geolocation (Preferred) if(navigator.geolocation) { browserSupportFlag = true; navigator.geolocation.getCurrentPosition(function(position) { initialLocation = new google.maps.LatLng(position.coords.latitude,position.coords.longitude); map.setCenter(initialLocation); //Custom Marker var pinColor = "A83C0A"; var pinImage = new google.maps.MarkerImage("http://chart.apis.google.com/chart?chst=d_map_pin_letter&chld=%E2%80%A2|" + pinColor, new google.maps.Size(21, 34), new google.maps.Point(0,0), new google.maps.Point(10, 34)); var pinShadow = new google.maps.MarkerImage("http://chart.apis.google.com/chart?chst=d_map_pin_shadow", new google.maps.Size(40, 37), new google.maps.Point(0, 0), new google.maps.Point(12, 35)); new google.maps.Marker({ position: initialLocation, map: map, icon: pinImage, shadow: pinShadow }); }, function() { handleNoGeolocation(browserSupportFlag); }); } // Browser doesn't support Geolocation else { browserSupportFlag = false; handleNoGeolocation(browserSupportFlag); } function handleNoGeolocation(errorFlag) { if (errorFlag == true) { //Geolocation service failed initialLocation = uscenter; } else { //Browser doesn't support geolocation initialLocation = uscenter; } map.setCenter(initialLocation); } } google.maps.event.addDomListener(window, 'load', initialize); </script>
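    One way around the iframe limitation: suppress the layer's built-in info windows and build the content yourself from the layer's click event, so the link is ordinary page DOM that fancybox can bind to directly. A sketch under that approach - the column names and form URL are illustrative:

        // Take over the info window so its link lives in the page DOM.
        layer.setOptions({ suppressInfoWindows: true });
        var infoWindow = new google.maps.InfoWindow();

        google.maps.event.addListener(layer, 'click', function(e) {
            var name = e.row['Contact Name'].value;   // column names are illustrative
            infoWindow.setContent('<b>' + name + '</b><br/>' +
                '<a class="form-overlay" href="contact-form.html">Open form</a>');
            infoWindow.setPosition(e.latLng);
            infoWindow.open(map);
        });

        // Bind fancybox once the info window's DOM is actually attached.
        google.maps.event.addListener(infoWindow, 'domready', function() {
            $('a.form-overlay').fancybox({ type: 'iframe' });
        });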

    Read the article

  • How to recover deleted files on ext3 fs

    - by Mike
    I have a drive which was using the ext3 filesystem. I am told that about 10 GB of data was deleted off the drive (probably via rm). The drive is currently mounted as read-only to preserve all data. Does anyone know of a method to restore some or all of the data? Also, if it helps, the OS was Fedora. I've also been told that the data is mostly ASCII Fortran source code and Matlab files. Conclusion: I have finally managed to get the data back, and by the simplest means ever! After weeks of trying and failing to bring back much of any data, I brought someone in today to take a look at it and offer suggestions; he simply cd'd to the directory and everything was there! It was never lost in the first place!!! Needless to say I feel really dumb right now, but I learned quite a lot with this whole fiasco. At any rate, while I was looking through data forensics solutions, I found that Autopsy, or more specifically The Sleuth Kit, was the most helpful. So I will accept that as the final answer. I would also like to note, for anyone that comes across this later on, that the most up-voted (currently) answer by sekenre was also helpful and I learned a lot, but ultimately it did not help with the type (very many, and some being very large) of files I was dealing with. So thanks to all of you who provided suggestions, and I wish you all the best!
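    For readers who land here with files that really were deleted, the tools that come up in this space are extundelete (journal-based undelete for ext3/ext4) and The Sleuth Kit mentioned above. A typical session, assuming the affected filesystem is /dev/sdb1 and stays unmounted or read-only (device name and inode number are illustrative):

        # Work on an image rather than the live device whenever possible:
        dd if=/dev/sdb1 of=/mnt/backup/sdb1.img bs=4M

        # Recover everything extundelete can reconstruct from the journal:
        extundelete /mnt/backup/sdb1.img --restore-all

        # Or inspect and extract with The Sleuth Kit:
        fls -rd /mnt/backup/sdb1.img                      # list deleted entries
        icat /mnt/backup/sdb1.img 12345 > recovered_file  # extract by inode number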

    Read the article

  • .NET Free memory usage (how to prevent overallocation / release memory to the OS)

    - by Ronan Thibaudau
    I'm currently working on a website that makes heavy use of cached data to avoid roundtrips. At startup we get a "large" graph (hundreds of thousands of objects of different kinds). Those objects are retrieved over WCF and deserialized (we use protocol buffers for serialization). I'm using Red Gate's memory profiler to debug memory issues (the memory didn't seem to fit with how much we should need "after" we're done initializing) and end up with this report. Now what we can gather from this report is that: 1) Most of the memory .NET allocated is free (it may have been rightfully allocated during deserialization, but now that it's free, I'd like for it to return to the OS). 2) Memory is fragmented (which is bad, as every time I refresh the cache I need to redo the memory-hungry deserialization process, and this in turn creates large objects that may throw an OutOfMemoryException due to fragmentation). 3) I have no clue why the space is fragmented, because when I look at the large object heap, there are only 30 instances: 15 object[] are directly attached to the GC and totally unrelated to me, 1 is a char array also attached directly to the GC heap, and the remaining 15 are mine but are not the cause of this, as I get the same report if I comment them out in code. So my question is: what can I do to go further with this? I'm not really sure what to look for in debugging/tools, as it seems my memory is fragmented, but not by me, and huge amounts of free space are allocated by .NET which I can't release. Also, please make sure you understand the question well before answering; I'm not looking for a way to free memory within .NET (GC.Collect), but to free memory that is already free in .NET to the system, as well as to defragment said memory. Note that a slow solution is fine: if it's possible to manually defragment the large heap I'd be all for it, as I can call it at the end of RefreshCache and it's OK if it takes 1 or 2 seconds to run. Thanks for your help! A few notes I forgot: 1) The project is a .NET 2.0 website; I get the same results running it in a .NET 4 pool, likewise if I run it in a .NET 4 pool, convert it to .NET 4 and recompile. 2) These are results of a release build, so a debug build can not be the issue. 3) And this is probably quite important: I do not get these issues at all in the webdev server, only in IIS; in webdev I get memory consumption rather close to my actual consumption (well, more, but not 5-10X more!)
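    For the record: on the runtimes the asker targets (.NET 2.0/4.0) there is no supported way to compact the large object heap or hand its free segments back on demand, which is why the usual advice is to avoid the large transient allocations in the first place (e.g. deserialize in streaming chunks). Later runtimes added exactly the knob being asked for - a sketch that is valid only on .NET 4.5.1 and newer:

        using System;
        using System.Runtime;

        public static class CacheMaintenance
        {
            // Call at the end of RefreshCache(); .NET 4.5.1+ only.
            public static void CompactLargeObjectHeap()
            {
                GCSettings.LargeObjectHeapCompactionMode =
                    GCLargeObjectHeapCompactionMode.CompactOnce;
                GC.Collect(); // compaction happens during this blocking collection
            }
        }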

    Read the article

  • ZFS Data Loss Scenarios

    - by Obtuse
    I'm looking toward building a largish ZFS pool (150 TB+), and I'd like to hear people's experiences about data loss scenarios due to failed hardware; in particular, distinguishing between instances where just some data is lost vs. the whole filesystem (or if there even is such a distinction in ZFS). For example: let's say a vdev is lost due to a failure like an external drive enclosure losing power, or a controller card failing. From what I've read the pool should go into a faulted mode, but if the vdev is returned the pool should recover? Or not? Or if the vdev is partially damaged, does one lose the whole pool, some files, etc.? What happens if a ZIL device fails? Or just one of several ZILs? Truly any and all anecdotes or hypothetical scenarios backed by deep technical knowledge are appreciated! Thanks! Update: We're doing this on the cheap since we are a small business (9 people or so), but we generate a fair amount of imaging data. The data is mostly smallish files, by my count about 500k files per TB. The data is important but not uber-critical. We are planning to use the ZFS pool to mirror a 48 TB "live" data array (in use for 3 years or so), and use the rest of the storage for 'archived' data. The pool will be shared using NFS. The rack is supposedly on a building backup generator line, and we have two APC UPSes capable of powering the rack at full load for 5 minutes or so.
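    Most of these scenarios reduce to one rule of thumb: a pool survives whatever each of its top-level vdevs can survive, but the permanent loss of any whole top-level vdev loses the pool. An illustrative layout for a build like this (device names invented); mirroring the SLOG means a single log-device failure costs at most a few seconds of synchronous writes:

        # Each raidz2 vdev tolerates two disk failures; spread each vdev's
        # disks across enclosures/controllers so one chassis can't take out
        # a whole top-level vdev.
        zpool create tank \
          raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
          raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
          log mirror c3t0d0 c3t1d0

        zpool status -v tank   # health, degraded vdevs, resilver progress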

    Read the article

  • SQL SERVER – Retrieve and Explore Database Backup without Restoring Database – Idera virtual databas

    - by pinaldave
    I recently downloaded Idera’s SQL virtual database, and tested it. There are a few things about this tool which caught my attention. My Scenario It is quite common in real life that sometimes observing or retrieving older data is necessary; however, it had changed as time passed by. The full database backup was 40 GB in size, and, to restore it on our production server, it usually takes around 16 to 22 minutes, depending on the load server that is usually present. This range in time varies from one server to another as per the configuration of the computer. Some other issues we used to have are the following: When we try to restore a large 40-GB database, we needed at least that much space on our production server. Once in a while, we even had to make changes in the restored database, and use the said changed and restored database for our purpose, making it more time-consuming. My Solution I have heard a lot about the Idera’s SQL virtual database tool.. Well, right after we started to test this tool, we found out that it really delivers what it promises. Using this software was very easy and we were able to restore our database from backup in less than 2 minutes, sparing us from the usual longer time of 16–22 minutes. The needful was finished in a total of 10 minutes. Another interesting observation is that there is no need to have an additional space for restoring the database. For complete database restoration, the single additional MB on the drive is not required anymore. We can use the database in the same way as our regular database, and there is no need for any additional configuration and setup. Let us look at the most relevant points of this product based on my initial experience: Quick restoration of the database backup No additional space required for database restoration virtual database has no physical .MDF or .LDF The database which is restored is, in fact, the backup file converted in the virtual database. DDL and DML queries can be executed against this virtually restored database. Regular backup operation can be implemented against virtual database, creating a physical .bak file that can be used for future use. There was no observed degradation in performance on the original database as well the restored virtual database. Additional T-SQL queries can be let off on the virtual database. Well, this summarizes my quick review. And, as I was saying, I am very impressed with the product and I plan to explore it more. There are many features that I have noticed in this tool, which I think can be very useful if properly understood. I had taken a few screenshots using my demo database afterwards. Let us see what other things this tool can do besides the mentioned activities. I am surprised with its performance so I want to know how exactly this feature works, specifically in the matter of why it does not create any additional files and yet, it still allows update on the virtually restored database. I guess I will have to send an e-mail to the developers of Idera and try to figure this out from them. I think this tool is very useful, and it delivers a high level of performance way more than what I expected. Soon, I will write a review for additional uses of SQL virtual database.. If you are using SQL virtual database in your production environment, I am eager to learn more about it and your experience while using it. The ‘Virtual’ Part of virtual database When I set out to test this software, I thought virtual database had something to do with Hyper-V or visualization. 
In fact, the virtual database is a kind of database which shows up in your SQL Server Management Studio without actually restoring or even creating it. This tool creates a database in SSMS from the backup of the same database. The backup, however, works virtually the same way as original database. Potential Usage of virtual database: As soon as I described this tool to my teammate, I think his very first reaction was, “hey, if we have this then there is no need for log shipping.” I find his comment very interesting as log shipping is something where logs are moved to another server. In fact, there are no updates on the database from log; I would rather compare it with Snapshot Replication. In fact, whatever we use, snapshot replicated database can be similarly used and configured with virtual database. I totally believe that we can use it for reporting purpose. In fact, after this database was configured, I think the uses of this tool are unlimited. I will have to spend some more time studying it and will get back to you. Click on images to see larger images. virtual database Console Harddrive Space before virtual database Setup Attach Full Backup Screen Backup on Harddrive Attach Full Backup Screen with Settings virtual database Setup – less than 60 sec virtual database Setup – Online Harddrive Space after virtual database Setup Point in Time Recovery Option – Timeline View virtual database Summary No Performance Difference between Regular DB vs Virtual DB Please note that all SQL Server MVP gets free license of this software. Reference: Pinal Dave (http://blog.SQLAuthority.com), Idera (virtual database) Filed under: Database, Pinal Dave, SQL, SQL Add-On, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, SQLAuthority News, T SQL, Technology Tagged: Idera

    Read the article

  • Building an ASP.Net 4.5 Web forms application - part 4

    - by nikolaosk
    ?his is the fourth post in a series of posts on how to design and implement an ASP.Net 4.5 Web Forms store that sells posters on line.There are 3 more posts in this series of posts.Please make sure you read them first.You can find the first post here. You can find the second post here. You can find the third post here.  In this new post we will build on the previous posts and we will demonstrate how to display the posters per category.We will add a ListView control on the PosterList.aspx and will bind data from the database. We will use the various templates.Then we will write code in the PosterList.aspx.cs to fetch data from the database.1) Launch Visual Studio and open your solution where your project lives2) Open the PosterList.aspx page. We will add some markup in this page. Have a look at the code below  <section class="posters-featured">                    <ul>                         <asp:ListView ID="posterList" runat="server"                            DataKeyNames="PosterID"                            GroupItemCount="3" ItemType="PostersOnLine.DAL.Poster" SelectMethod="GetPosters">                            <EmptyDataTemplate>                                      <table id="Table1" runat="server">                                            <tr>                                                  <td>We have no data.</td>                                            </tr>                                     </table>                              </EmptyDataTemplate>                              <EmptyItemTemplate>                                     <td id="Td1" runat="server" />                              </EmptyItemTemplate>                              <GroupTemplate>                                    <tr ID="itemPlaceholderContainer" runat="server">                                          <td ID="itemPlaceholder" runat="server"></td>                                    </tr>                              </GroupTemplate>                              <ItemTemplate>                                    <td id="Td2" runat="server">                                          <table>                                                <tr>                                                      <td>&nbsp;</td>                                                      <td>                                                <a href="PosterDetails.aspx?posterID=<%#:Item.PosterID%>">                                                    <img src="<%#:Item.PosterImgpath%>"                                                        width="100" height="75" border="1"/></a>                                             </td>                                            <td>                                                <a href="PosterDetails.aspx?posterID=<%#:Item.PosterID%>">                                                    <span class="PosterName">                                                        <%#:Item.PosterName%>                                                    </span>                                                </a>                                                            <br />                                                <span class="PosterPrice">                                                               <b>Price: </b><%#:String.Format("{0:c}", Item.PosterPrice)%>                                                </span>                                                <br />                                                        </td>                                                </tr>       
                                   </table>                                    </td>                              </ItemTemplate>                              <LayoutTemplate>                                    <table id="Table2" runat="server">                                          <tr id="Tr1" runat="server">                                                <td id="Td3" runat="server">                                                      <table ID="groupPlaceholderContainer" runat="server">                                                            <tr ID="groupPlaceholder" runat="server"></tr>                                                      </table>                                                </td>                                          </tr>                                          <tr id="Tr2" runat="server"><td id="Td4" runat="server"></td></tr>                                    </table>                              </LayoutTemplate>                        </asp:ListView>                    </ul>               </section>  3) We have a ListView control on the page called PosterList. I set the ItemType property to the Poster class and then the SelectMethod to the GetPosters method.  I will create this method later on.   (ItemType="PostersOnLine.DAL.Poster" SelectMethod="GetPosters")Then in the code below  I have the data-binding expression Item  available and the control becomes strongly typed.So when the user clicks on the link of the poster's category the relevant information will be displayed (photo,name and price)                                            <td>                                                <a href="PosterDetails.aspx?posterID=<%#:Item.PosterID%>">                                                    <img src="<%#:Item.PosterImgpath%>"                                                        width="100" height="75" border="1"/></a>                                             </td>4)  Now we need to write the simple method to populate the ListView control.It is called GetPosters method.The code follows   public IQueryable<Poster> GetPosters([QueryString("id")] int? PosterCatID)        {            PosterContext ctx = new PosterContext();            IQueryable<Poster> query = ctx.Posters;            if (PosterCatID.HasValue && PosterCatID > 0)            {                query = query.Where(p=>p.PosterCategoryID==PosterCatID);            }            return query;                    } This is a very simple method that returns information about posters related to the PosterCatID passed to it.I bind the value from the query string to the PosterCatID parameter at run time.This is all possible due to the QueryStringAttribute class that lives inside the System.Web.ModelBinding and gets the value of the query string variable id.5) I run my application and then click on the "Midfilders" link. Have a look at the picture below to see the results.  In the Site.css file I added some new CSS rules to make everything more presentable. .posters-featured {    width:840px;    background-color:#efefef;}.posters-featured   a:link, a:visited,    a:active, a:hover {        color: #000033;    }.posters-featured    a:hover {        background-color: #85c465;    }  6) I run the application again and this time I do not choose any category, I simply navigate to the PosterList.aspx page. 
I see all the posters since no query string was passed as a parameter. Have a look at the picture below. Make sure you place breakpoints in the code so you can see what is really going on. In the next post I will show you how to display poster details. Hope it helps!!!

    Read the article

  • Oracle data warehouse design - fact table acting as a dimension?

    - by Elizabeth
    THANKS: Both answers here are very helpful, but I could only pick one. I really appreciate the advice! our datawarehouse will be used more for workflow reports than traditional analytical reports. Our users care about "current picture" far more than history. (though history matters, too.) We are a government entity that does not have costs or related calculations. Mostly just counts of people within given locations and with related history. We are using Oracle, and I have found distinct advantage in using the star join whenever possible and would like to rearchitect everything to as closely resemble the star schema as is reasonable for our business uses. Speed in this DW is vital, and a number of tests have already proven the star schema approach to me. Our "person" table is key - it contains over 4 million records and will be the most frequently used source in queries. It can be seen at the center of a star with multiple dimensions (like age, gender, affiliation, location, etc.). It is a very LONG table, particularly when I join it to the address and contact information. However, it is more like a dimension table when we start looking at history. For example, there are two different history tables that have a person key pointing to the person table. One has over 20 million records and the other has almost 50 million and grows daily. Is this table a fact table or a dimension table? Can one work as both? If so, is that going to be a big performance problem? Is it common to query more off of a dimension than a fact? What happens if a DIFFERENT fact table that uses the person table as a dimension is actually only 60,000 records (much smaller.). I think my problem is that our data and use of it does not fit with the commonly use examples of star schemas. CLARIFICATION: Some good thoughts have been added below, but perhaps I left too much out to really explain well. Here's some more info: We handle a voter database. We don't have any measures except voter counts by various groups: voter counts by party, by age, by location; voter counts by ballot type and election, by ballot status and election, etc. We do have a "voting history" log as well as an activity audit log (change of address, party, etc.). We have information on which voters are election workers and all that related information. I figure I'll get to the peripheral stuff later. For now I'm focusing on our two major "business processes": voter registration(which IS a voter.) and election turnout. In the first, voter is a fact. In the second, voter is a dimension, along with party, election, and type of ballot. (and in case anyone is worried - no we don't know HOW people vote. Just that they do. LOL ) I hope that clarifies things a bit.

    Read the article

  • Flex/PHP/XML data issue

    - by reado
    I have built a simple application in Flex. When the application loads, a GET request is made to the xmlService.php file with parameters "fetchData=letters". This tells the PHP to return the XML code. In Flex Debug I can see the XML data being sent by the PHP to the flex client. What I need it to do is populate the first drop down box (id="letter") with this data, however nothing is being received by Flex. I added an Alert.show() to check what was being returned but when the application runs, the alert is blank. Can anyone help? Thanks in advance. Image: http://static.readescdn.com/misc/flex.gif // Flex <?xml version="1.0" encoding="utf-8"?> <s:WindowedApplication xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx" width="300" height="300" creationComplete="windowedapplication1_creationCompleteHandler(event)"> <fx:Script> <![CDATA[ import mx.collections.ArrayCollection; import mx.controls.Alert; import mx.events.FlexEvent; import mx.rpc.events.FaultEvent; import mx.rpc.events.ResultEvent; import spark.events.IndexChangeEvent; protected function windowedapplication1_creationCompleteHandler(event:FlexEvent):void { var params:Object = {'fetchData':'letters'}; xmlService.send(params); } protected function xmlService_resultHandler(event:ResultEvent):void { var id:String = xmlService.lastResult.data.id.value; //Alert.show(xmlService.lastResult.data.id.value); if(id == 'letter') { letter.dataProvider = xmlService.lastResult.data.letter; letter.enabled = true; } else if(id == 'number') { number.dataProvider = xmlService.lastResult.data.number; number.enabled = true; submit.enabled = true; } else { submit.label = 'No Data!'; } } protected function xmlService_faultHandler(event:FaultEvent):void { Alert.show(event.fault.message); } protected function letter_changeHandler(event:IndexChangeEvent):void { var params:Object = {'fetchData':'numbers'}; xmlService.send(params); } ]]> </fx:Script> <fx:Declarations> <s:HTTPService id="xmlService" url="URL_GOES_HERE" method="POST" useProxy="false" resultFormat="e4x" result="xmlService_resultHandler(event)" fault="xmlService_faultHandler(event)"/> </fx:Declarations> <s:DropDownList x="94" y="10" id="letter" enabled="false" change="letter_changeHandler(event)" labelField="value"></s:DropDownList> <s:DropDownList x="94" y="39" id="number" enabled="false" labelField="value"></s:DropDownList> <s:Button x="115" y="68" label="Submit" id="submit" enabled="false"/> </s:WindowedApplication> // PHP <? if(isset($_POST['fetchData'])) { if($_POST['fetchData'] == 'letters') { $xml = '<data> <id value="letters"/> <letter label="Letter A" value="a"/> <letter label="Letter B" value="b"/> <letter label="Letter C" value="c"/> </data>'; } else if($_POST['fetchData'] == 'numbers') { $xml = '<data> <id value="letters"/> <number label="Number 1" value="1"/> <number label="Number 2" value="2"/> <number label="Number 3" value="3"/> </data>'; } else { $xml = '<data> <result value="'.$_POST['fetchData'].'"/> </data>'; } echo $xml; } else { echo '<data> <result value="NULL"/> </data>'; } ?>
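    Two things stand out. First, with resultFormat="e4x" the lastResult is the <data/> element itself, and value="..." is an attribute, so it must be read with E4X @ syntax rather than .data.id.value. Second, the PHP returns value="letters" while the handler compares against 'letter', so the branch never matches; the PHP should also send header('Content-Type: text/xml') before echoing. A sketch of the corrected handler - the dataProvider/labelField details assume a Spark DropDownList fed XML items:

        protected function xmlService_resultHandler(event:ResultEvent):void
        {
            // event.result IS the <data/> root; value="" is an attribute.
            var xml:XML = event.result as XML;
            var id:String = xml.id.@value;          // "letters" or "numbers"

            if (id == 'letters')
            {
                letter.labelField = "@label";       // display the label attribute
                letter.dataProvider = new XMLListCollection(xml.letter);
                letter.enabled = true;
            }
        }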

    Read the article

  • ASP.NET MVC2 - usage of LINQ-generated class (validation problem)

    - by ile
    There are few things not clear to me about ASP.NET MV2. In database I have table Contacts with several fields, and there is an additional field XmlFields of which type is xml. In that field are stored additional description fields. There are 4 classes: Contact class which corresponds to Contact table and is defined by default when creating LINQ classes ContactListView class which inherits Contact class and has some additional properties ContactXmlView class that contains fields from XmlFields field ContactDetailsView class which merges ContactListView and ContactXmlView into one class and this one is used to display data in view pages ContactListView class has re-defined some properties from Contact class (so that I can add [Required] filter used for validation) - but I get warning message: 'ObjectTest.Models.Contacts.ContactListView.FirstName' hides inherited member 'SA.Model.Contact.FirstName'. Use the new keyword if hiding was intended. ContactDetailsView class is also used in a form when creating new contact and adding it to database. I am not sure if this is correct way, and the warning message confuses me a bit. Any advise about this? Thanks, Ile EDIT According to Jakob's instructions I tried it from scratch: [MetadataType(typeof(Person_Validation))] public partial class Person { } public class Person_Validation { [Required] string FirstName { get; set; } [Required] string LastName { get; set; } [Required] int Age { get; set; } } In Controller I have this: [HttpPost] public ActionResult Create(Person person, FormCollection collection) { if (ModelState.IsValid) { try { personRepository.Add(person); personRepository.Save(); } catch { return View(person); } } return RedirectToAction("Index"); } View: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<Validate.Models.Person>" %> <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server"> Create </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <h2>Create</h2> <% using (Html.BeginForm()) {%> <%= Html.ValidationSummary(true) %> <fieldset> <legend>Fields</legend> <div class="editor-label"> <%= Html.LabelFor(model => model.FirstName) %> </div> <div class="editor-field"> <%= Html.TextBoxFor(model => model.FirstName) %> <%= Html.ValidationMessageFor(model => model.FirstName) %> </div> <div class="editor-label"> <%= Html.LabelFor(model => model.LastName) %> </div> <div class="editor-field"> <%= Html.TextBoxFor(model => model.LastName) %> <%= Html.ValidationMessageFor(model => model.LastName) %> </div> <div class="editor-label"> <%= Html.LabelFor(model => model.Age) %> </div> <div class="editor-field"> <%= Html.TextBoxFor(model => model.Age) %> <%= Html.ValidationMessageFor(model => model.Age) %> </div> <p> <input type="submit" value="Create" /> </p> </fieldset> <% } %> <div> <%= Html.ActionLink("Back to List", "Index") %> </div> </asp:Content> When posting new person with no values, nothing happens (page is just reloaded). When posting with some values, person is added to db. I have no idea what am I doing wrong.
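    Two likely culprits here. C# class members default to private, so the properties in Person_Validation are invisible to the metadata provider and the [Required] rules never register - buddy-class members must be public. And the POST action falls through to RedirectToAction even when ModelState is invalid, which is why "nothing happens". A sketch of both fixes:

        [MetadataType(typeof(Person_Validation))]
        public partial class Person { }

        public class Person_Validation
        {
            [Required] public string FirstName { get; set; }
            [Required] public string LastName { get; set; }
            [Required] public int Age { get; set; }
        }

        [HttpPost]
        public ActionResult Create(Person person)
        {
            if (!ModelState.IsValid)
                return View(person);   // redisplay with validation messages

            personRepository.Add(person);
            personRepository.Save();
            return RedirectToAction("Index");
        }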

    Read the article

  • Js (+Mootools) - Why does my script use over 60% of the processor?

    - by Misiur
    On this site - LINK - i need to use 3 banner scrollers (2x vertical + 1x horizontal). I've tried to do it in flash, but then everyone web browsers shut down, or suspended. Now i want to do it in JS (i use mootools). All data come from MySQL. Here's the complete code (even if you don't know mootools, You should understand it) global $wpdb; $table = $wpdb->prefix.'part'; $sql = "SELECT * FROM $table"; $q = $wpdb->get_results($sql); $g = 0; if($wpdb->num_rows > 0) { ?> <script type="text/javascript"> window.addEvent('load', function(){ var totall = 0; var totalr = 0; $$('#leftCont0 .contElement').each(function(el){ var img = new Asset.image(el.getFirst('a').getFirst('img').get('src')); totall += img.height; }); $$('#rightCont0 .contElement').each(function(el){ var img = new Asset.image(el.getFirst('a').getFirst('img').get('src')); totalr += img.height; }); $$('.leftCont').each(function(el){ var h = parseInt(el.get('id').substr(8)); el.setStyle('top', h * totall); }); $$('.rightCont').each(function(el){ var h = parseInt(el.get('id').substr(9)); el.setStyle('top', h * totalr); }); var total = new Array(totall, totalr); move.periodical(30, null, total); }); function move(num, num2) { var h = 0; var da = false; var target = null; $$('.leftCont').each(function(el){ var act = el.getStyle('top'); var n = parseInt(act)+1; el.setStyle('top', n+"px"); if(el.getStyle('top') < h) { h = parseInt(el.getStyle('top')); alert(h); } if(parseInt(el.getStyle('top')) > 400) { da = true; target = el; } }); if(da) { var n = h - num; target.setStyle('top', n+'px'); } h = 0; da = false; $$('.rightCont').each(function(el){ var act = el.getStyle('top'); var n = parseInt(act)+1; el.setStyle('top', n+"px"); if(el.getStyle('top') < h) { h = parseInt(el.getStyle('top')); alert(h); } if(parseInt(el.getStyle('top')) > 400) { da = true; target = el; } }); if(da) { var n = h - num2; target.setStyle('top', n+'px'); } } </script> <?php $g = 0; $l = 0; $r = 0; $leftContent = array(); $rightContent = array(); $leftHeight = 0; $rightHeight = 0; foreach($q as $q) { if(($g % 2) == 0) { $leftContent[$l] = '<div class="contElement"> <a href="'.$q->aurl.'"><img src="'.$q->imgurl.'" alt="Partner" /></a> </div>'; $lHeight = getimagesize($q->imgurl); $leftHeight .= $lHeight[1]; $l++; } else { $rightContent[$r] = '<div class="contElement"> <a href="'.$q->aurl.'"><img src="'.$q->imgurl.'" alt="Partner" /></a> </div>'; $rHeight = getimagesize($q->imgurl); $rightHeight .= $rHeight[1]; $r++; } $g++; } $quantity = ceil(400 / $leftHeight) + 1; for($i = 0; $i < $quantity; $i++) { $str = ""; for($j = 0; $j < sizeof($leftContent); $j++) { $str .= $leftContent[$j]; } $leftContainer[$i] = '<div class="leftCont" id="leftCont'.$i.'">'.$str.'</div>'; } $quantity = ceil(400 / $rightHeight) + 1; for($i = 0; $i < $quantity; $i++) { $str = ""; for($j = 0; $j < sizeof($rightContent); $j++) { $str .= $rightContent[$j]; } $rightContainer[$i] = '<div class="rightCont" id="rightCont'.$i.'">'.$str.'</div>'; } ?> <div id="pcl"> <?php for($i = 0; $i < sizeof($leftContainer); $i++) { echo $leftContainer[$i]; } ?> </div> <div id="pcr"> <?php for($i = 0; $i < sizeof($rightContainer); $i++) { echo $rightContainer[$i]; } ?> </div> <?php }
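    Two cheap wins for the CPU, whatever the final design: stop re-querying the DOM and re-reading computed styles inside a 30 ms timer, and keep positions in plain numbers so each tick does exactly one style write per element. A MooTools-flavored sketch that reuses the totall/totalr heights computed in the load handler above:

        // Cache elements and positions once, outside the timer.
        var scrollers = $$('.leftCont').map(function(el) {
            return { el: el, top: el.getStyle('top').toInt(), wrap: totall };
        }).concat($$('.rightCont').map(function(el) {
            return { el: el, top: el.getStyle('top').toInt(), wrap: totalr };
        }));

        function tick() {
            for (var i = 0; i < scrollers.length; i++) {
                var s = scrollers[i];
                s.top += 1;
                if (s.top > 400) s.top -= s.wrap;     // wrap around, no DOM read
                s.el.setStyle('top', s.top + 'px');   // the only DOM write
            }
        }
        tick.periodical(50);  // ~20 fps is plenty for a banner crawl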

    Read the article

  • SQL Server – SafePeak “Logon Trigger” Feature for Managing Data Access

    - by pinaldave
    Lately I received an interesting question about the abilities of SafePeak for SQL Server acceleration software: Q: “I would like to use SafePeak to make my CRM application faster. It is an application we bought from some vendor, after a while it became slow and we can’t reprogram it. SafePeak automated caching sounds like an easy and good solution for us. But, in my application there are many servers and different other applications services that address its main database, and some even change data, and I feel that there is a chance that some servers that during the connection process we may miss some. Is there a way to ensure that SafePeak will be aware of all connections to the SQL Server, so its cache will remain intact?” Interesting question, as I remember that SafePeak (http://www.safepeak.com/Product/SafePeak-Overview) likes that all traffic to the database will go thru it. I decided to check out the features of SafePeak latest version (2.1) and seek for an answer there. A: Indeed I found SafePeak has a feature they call “Logon Trigger” and is designed for that purpose. It is located in the user interface, under: Settings -> SQL instances management  ->  [your instance]  ->  [Logon Trigger] tab. From here you activate / deactivate it and control a white-list of enabled server IPs and Login names that SafePeak will ignore them. Click to Enlarge After activation of the “logon trigger” Safepeak server is notified by the SQL Server itself on each new opened connection. Safepeak monitors those connections and decides if there is something to do with them or not. On a typical installation SafePeak likes all application and users connections to go via SafePeak – this way it knows about data and schema updates immediately (real time). With activation of the safepeak “logon trigger”  a special CLR trigger is deployed on the SQL server and notifies Safepeak on any connection that has not arrived via SafePeak. In such cases Safepeak can act to clear and lock the cache or to ignore it. This feature enables to make sure SafePeak will be aware of all connections so SafePeak cache will maintain exactly correct all times. So even if a user, like a DBA will connect to the SQL Server not via SafePeak, SafePeak will know about it and take actions. The notification does not impact the work of that connection, the user or application still continue to do whatever they planned to do. Note: I found that activation of logon trigger in SafePeak requires that SafePeak SQL login will have the next permissions: 1) CONTROL SERVER; 2) VIEW SERVER STATE; 3) And the SQL Server instance is CLR enabled; Seeing SafePeak in action, I can say SafePeak brings fantastic resource for those who seek to get performance for SQL Server critical apps. SafePeak promises to accelerate SQL Server applications in just several hours of installation, automatic learning and some optimization configuration (no code changes!!!). If better application and database performance means better business to you – I suggest you to download and try SafePeak. The solution of SafePeak is indeed unique, and the questions I receive are very interesting. Have any more questions on SafePeak? Please leave your question as a comment and I will try to get an answer for you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Fast Data: Go Big. Go Fast.

    - by Dain C. Hansen
    For those of you who may have missed it, today’s second full day of Oracle OpenWorld 2012 started with a rumpus. Joe Tucci, from EMC, outlined the human face of big data with real examples of how big data is transforming our world. And no, not the usual tried-and-true weblog examples, but real stories about taxi cab drivers in Singapore using big data to better optimize their routes as well as folks just trying to get a better hair cut. Next we heard from Thomas Kurian, who talked at length about the important platform characteristics of Oracle’s Cloud and more specifically Oracle’s expanded Cloud Services portfolio. Especially interesting to our integration customers is the messaging support for Oracle’s Cloud applications. What this means is that Oracle’s Cloud applications now have a lightweight integration fabric that on-premise applications can communicate with via REST APIs using Oracle SOA Suite. It’s an important element of our strategy at Oracle that supports the idea that whether your requirements are for private or public, Oracle has a solution in the Cloud for all of your applications, and we give you more deployment choice than any vendor. If this wasn’t enough to get the juices flowing, later that morning we heard from Hasan Rizvi, who outlined in his Fusion Middleware session the four most important enterprise imperatives: Social, Mobile, Cloud, and a brand new one: Fast Data. Today, Rizvi made an important step in the definition of this term, explaining that he believes it’s a convergence of four essential technology elements: Event processing for event filtering and business rules – with Oracle Event Processing; Data transformation and loading – with Oracle Data Integrator; Real-time replication and integration – with Oracle GoldenGate; Analytics and data discovery – with Oracle Business Intelligence. Each of these four elements can be considered (and architected) together on a single integrated platform that can help customers integrate any type of data (structured, semi-structured), leveraging new styles of big data technologies (MapReduce, HDFS, Hive, NoSQL) to process more volume and variety of data at a faster velocity with greater results. Fast data processing (and especially real-time) has always been our credo at Oracle with each one of these products in Fusion Middleware. For example, Oracle GoldenGate continues to be made even faster with the recent 11g R2 release of Oracle GoldenGate, which gives us even greater optimization with Oracle Database with Integrated Capture, as well as some new heterogeneity capabilities. With Oracle Data Integrator with Big Data Connectors, we’re seeing much improved performance by running MapReduce transformations natively on Hadoop systems. And with Oracle Event Processing we’re seeing some remarkable performance with customers like NTT Docomo. Check out their upcoming session at Oracle OpenWorld on Wednesday to hear more about how this customer is using event processing and big data together. If you missed any of these sessions and keynotes, not to worry: there are on-demand versions available on the Oracle OpenWorld website. You can also check out our upcoming webcast where we will outline some of these new breakthroughs in data integration technologies for big data, Cloud, and real-time in more detail.

    Read the article

  • SQL SERVER – Installing SQL Server Data Tools and SSRS

    - by Pinal Dave
    This example is from the Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site. If you have installed SQL Server, but are missing the Data Tools or Reporting Services Double-click the SQL Server 2012 installation media. Click the Installation link on the left to view the Installation options. Click the top link New SQL Server stand-alone installation or add features to an existing installation. Follow the SQL Server Setup wizard until you get to the Installation Type screen. At that screen, select Add features to an existing instance of SQL Server 2012. Click Next to move to the Feature Selection page. Select Reporting Services – Native and SQL Server Data Tools. If the Management Tools have not been installed, go ahead and choose them as well. Continue through the wizard and reboot the computer at the end of the installation if instructed to do so. Configure Reporting Services If you installed Reporting Services during the installation of the SQL Server instance, SSRS will be configured automatically for you. If you install SSRS later, then you will have to go back and configure it as a subsequent step. Click Start > All Programs > Microsoft SQL Server 2012 > Configuration Tools > Reporting Services Configuration Manager > Connect on the Reporting Services Configuration Connection dialog box. On the left-hand side of the Reporting Services Configuration Manager, click Database. Click the Change Database button on the right side of the screen. Select Create a new report server database and click Next. Click through the rest of the wizard accepting the defaults. This wizard creates two databases: ReportServer, used to store report definitions and security, and ReportServerTempDB which is used as scratch space when preparing reports for user requests. Now click Web Service URL on the left-hand side of the Reporting Services Configuration Manager. Click the Apply button to accept the defaults. If the Apply button has been grayed out, move on to the next step. This step sets up the SSRS web service. The web service is the program that runs in the background that communicates between the web page, which you will set up next, and the databases. The final configuration step is to select the Report Manager URL link on the left. Accept the default settings and click Apply. If the Apply button was already grayed out, this means the SSRS was already configured. This step sets up the Report Manager web site where you will publish reports. You may be wondering if you also must install a web server on your computer. SQL Server does not require that the Internet Information Server (IIS), the Microsoft web server, be installed to run Report Manager. Click Exit to dismiss the Reporting Services Configuration Manager dialog box. Tomorrow’s Post Tomorrow’s blog post will show how to create your first report using the Report Wizard. If you want to learn SSRS in easy to simple words – I strongly recommend you to get Beginning SSRS book from Joes 2 Pros. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS

    Read the article

  • How to Visualize your Audit Data with BI Publisher?

    - by kanichiro.nishida
    Do you know how many reports on your BI Publisher server were accessed yesterday? Or how many users accessed the reports yesterday, or what the average number of users accessing the reports is during the week vs. the weekend, or in the morning vs. the afternoon? With BI Publisher 11G, you can now audit your users' report access and understand the state of the reporting environment at the server, user, or report level. In the previous post I talked about BI Publisher's auditing functionality and how to enable it so that BI Publisher can start collecting such data. (How to Audit and Monitor BI Publisher Reports Access?)

    Now, how can you visualize such auditing data to get a better understanding and gain more insights? With the Fusion Middleware Audit Framework you have the option to store the auditing data in a database instead of a log file, which is the default. Once you enable the database storage option, your auditing data (that is, the user report access data) lives in database tables, and from there it is a no-brainer: you can start visualizing the data, creating reports, analyzing, and sharing with BI Publisher. So first, let's take a look at how to enable the database storage option for the auditing data.

    How to Feed the Auditing Data into the Database

    First you need to create a database schema for the Fusion Middleware Audit Framework with RCU (Repository Creation Utility). If you have already installed BI Publisher 11G you should be familiar with RCU; it creates any database schema necessary to run Fusion Middleware products, including the BI ones. You can use the same RCU that you used for your BI or BI Publisher installation to create this audit schema.

    Create Audit Schema with RCU

    Here are the steps:
    1. Go to $RCU_HOME/bin and execute the 'rcu' command.
    2. Choose Create at the starting screen and click Next.
    3. Enter your database details and click Next.
    4. Choose the option to create a new prefix, for example 'BIP', 'KAN', etc.
    5. Select 'Audit Services' from the list of schemas.
    6. Click Next and accept the tablespace creation.
    7. Click Finish to start the process.

    After this, the following three audit-related schemas should exist in your database:
    <prefix>_IAU (e.g. KAN_IAU)
    <prefix>_IAU_APPEND (e.g. KAN_IAU_APPEND)
    <prefix>_IAU_VIEWER (e.g. KAN_IAU_VIEWER)

    Setup Datasource at WebLogic

    After you create the database schema for your auditing data, you need to create a JDBC connection on your WebLogic Server so the Audit Framework can access the schema that RCU created in the previous step.
    1. Connect to the Oracle WebLogic Server administration console: http://hostname:port/console (e.g. http://report.oracle.com:7001/console).
    2. Under Services, click the Data Sources link.
    3. Click 'Lock & Edit' so that you can make changes.
    4. Click New > 'Generic Datasource' to create a new data source.
    5. Enter the following details for the new data source. Name: a name such as Audit Data Source-0. JNDI Name: jdbc/AuditDB. Database Type: Oracle.
    6. Click Next and select 'Oracle's Driver (Thin XA) Versions: 9.0.1 or later' as the Database Driver (if you're using an Oracle database), and click Next.
    7. The Connection Properties page appears. Enter the following information. Database Name: the name of the database (SID) to which you will connect. Host Name: the hostname of the database. Port: the database port. Database User Name: the name of the audit schema that you created in RCU; the suffix is always IAU, so if you gave the prefix as 'BIP', the schema name would be 'BIP_IAU'. Password: the password for the audit schema that you created in RCU.
    8. Click Next. Accept the defaults, and click Test Configuration to verify the connection. Click Next.
    9. Check the listed servers where you want to make this JDBC connection available, and click 'Finish'.
    After that, make sure you click 'Activate Changes' at the top left-hand side to put the new JDBC connection into effect.

    Register your Audit Data Storing Database to your Domain

    Finally, you can register the JNDI/JDBC datasource as your auditing data storage with Fusion Middleware Control (EM). Here are the steps:
    1. Login to Fusion Middleware Control.
    2. Navigate to the WebLogic Domain, right click on 'bifoundation…..', select Security, then Audit Store.
    3. Click the searchlight icon next to the Datasource JNDI Name field.
    4. Select the audit JNDI/JDBC datasource you created in the previous step in the pop-up window and click OK.
    5. Click Apply to continue.
    6. Restart all the WebLogic Servers in the domain.

    After this, BI Publisher should start feeding all the auditing data into the database table called 'IAU_BASE'. Try logging in to BI Publisher and opening a couple of reports; you should see the activity audited in the 'IAU_BASE' table. If it is not working, you might want to check the log file, located at $BI_HOME/user_projects/domains/bifoundation_domain/servers/AdminServer/logs/AdminServer-diagnostic.log, to see if there is any error. Once you have the data in the database table, it's time to visualize it with BI Publisher reports!

    Create a First BI Publisher Auditing Report

    Register Auditing Datasource as JNDI Datasource

    The first thing you need to do is register the audit datasource (the JNDI/JDBC connection you created in the previous step) as a JNDI data source in BI Publisher. It is a JDBC connection registered as JNDI, which means you don't need to create a new JDBC connection by typing the connection URL, username/password, etc. You can just register it using the JNDI name (e.g. jdbc/AuditDB).
    1. Login to BI Publisher as Administrator (e.g. weblogic).
    2. Go to the Administration page.
    3. Click 'JNDI Connection' under Data Sources and click 'New'.
    4. Type the Data Source Name and JNDI Name. The JNDI Name is the one you created in the WebLogic Console as the auditing datasource (e.g. jdbc/AuditDB).
    5. Click 'Test Connection' to make sure the datasource connection works.
    6. Provide the appropriate roles so that report developers or viewers can share this data source to view reports.
    7. Click 'Apply' to save.

    Create Data Model

    1. Select Data Model from the 'New' toolbar menu.
    2. Set 'Default Data Source' to the audit JNDI data source you created in the previous step.
    3. Select 'SQL Query' for your data set.
    4. Use Query Builder to build a query, or just type a SQL query. Either way, the table you want to report against is 'IAU_BASE'.
    The IAU_BASE table also contains the auditing data for the other products running on the WebLogic Server, such as JPS, OID, etc. So if you care only about BI Publisher, you want to filter on the 'IAU_COMPONENTTYPE' column, which contains the product name (e.g. 'xmlpserver' for BI Publisher). Here is my sample SQL query:
    select
        "IAU_BASE"."IAU_COMPONENTTYPE" as "IAU_COMPONENTTYPE",
        "IAU_BASE"."IAU_EVENTTYPE" as "IAU_EVENTTYPE",
        "IAU_BASE"."IAU_EVENTCATEGORY" as "IAU_EVENTCATEGORY",
        "IAU_BASE"."IAU_TSTZORIGINATING" as "IAU_TSTZORIGINATING",
        to_char("IAU_TSTZORIGINATING", 'YYYY-MM-DD') as IAU_DATE,
        to_char("IAU_TSTZORIGINATING", 'DAY') as IAU_DAY,
        to_char("IAU_TSTZORIGINATING", 'HH24') as IAU_HH24,
        to_char("IAU_TSTZORIGINATING", 'WW') as IAU_WEEK_OF_YEAR,
        "IAU_BASE"."IAU_INITIATOR" as "IAU_INITIATOR",
        "IAU_BASE"."IAU_RESOURCE" as "IAU_RESOURCE",
        "IAU_BASE"."IAU_TARGET" as "IAU_TARGET",
        "IAU_BASE"."IAU_MESSAGETEXT" as "IAU_MESSAGETEXT",
        "IAU_BASE"."IAU_FAILURECODE" as "IAU_FAILURECODE",
        "IAU_BASE"."IAU_REMOTEIP" as "IAU_REMOTEIP"
    from "KAN3_IAU"."IAU_BASE" "IAU_BASE"
    where "IAU_BASE"."IAU_COMPONENTTYPE" = 'xmlpserver'

    Once you have saved a sample XML for this data model, you can create a report with it.

    Create Report

    Now you can use one of BI Publisher's layout options to design the report layout and visualize the auditing data. I'm a big fan of the Online Layout Editor: it's just so easy and simple to create reports, and on top of that, all reports created with the Online Layout Editor get the Interactive View with automatic data linking and filtering, without any setting or coding. If you haven't checked out the Interactive View or the Online Layout Editor, you might want to look at these previous blog posts. (Interactive Reporting with BI Publisher 11G, Interactive Master Detail Report Just A Few Clicks Away!) But of course, you can use other layout design options such as the RTF template. Here are some sample screenshots of my report design with the Online Layout Editor.

    Visualize and Gain More Insights about your Customers (Users)!

    Now you can visualize your auditing data to understand the reporting environment you manage better and gain more insights. Personally, it has been helping me answer questions like these:
    - How many reports were accessed or opened yesterday, today, last week?
    - Who is accessing which report, and at what time?
    - In which time windows does most of the report access happen?
    - What are the most viewed reports?
    - Who are the active users?
    - What is the trend in report access or user access over the last month, 6 months, 12 months, etc.?

    I was talking with one of the best concierges in the world at a hotel the other day, and he was telling me that the best concierges know their customers inside out, and can therefore provide a very personal service customized to each customer's specific needs. Well, the same is true when it comes to administering and managing your reporting environment, right? The best way to serve your customers (report users, both viewers and developers) is to understand how they use it, what they use, and when they use it. Auditing is not just about compliance; it is a way to improve your customer service. The BI Publisher 11G auditing feature enables just that, helping you understand your customers better. Happy customer service, and be the best reporting concierge!

    p.s. Please share with us what other information would be helpful to you for auditing! As always, any feedback is of great value and an inspiration to us!
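    p.p.s. To extend the data model above, an aggregate data set turns the trend questions listed earlier into a one-chart answer. Here is a minimal sketch of my own (not from the original post; it assumes the same KAN3_IAU schema and IAU_BASE columns used above) that counts BI Publisher report-access events per day and per user:

    -- Hypothetical aggregate over the audit table (assumes the KAN3_IAU schema above):
    -- returns one row per day and user with the number of report-access events.
    select
        to_char("IAU_TSTZORIGINATING", 'YYYY-MM-DD') as IAU_DATE,
        "IAU_INITIATOR" as REPORT_USER,
        count(*) as ACCESS_COUNT
    from "KAN3_IAU"."IAU_BASE"
    where "IAU_COMPONENTTYPE" = 'xmlpserver'
    group by to_char("IAU_TSTZORIGINATING", 'YYYY-MM-DD'), "IAU_INITIATOR"
    order by IAU_DATE, REPORT_USER

    Point a line or bar chart in the Online Layout Editor at a data set like this and you get the daily access trend per user at a glance.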

    Read the article

  • You Probably Already Have a “Private Cloud”

    - by BuckWoody
    I’ve mentioned before that I’m not a fan of the word “Cloud”. It’s too marketing-oriented, gimmicky and non-specific. A better definition (in many cases) is “Distributed Computing”. That means that some or all of the computing functions are handled somewhere other than under your specific control.

    But there is a current use of the word “Cloud” that does not necessarily mean the computing is done somewhere else. In fact, it’s a vector of Cloud Computing that is better termed “Utility Computing”. This has to do with the provisioning of a computing resource: the setup, configuration, management, balancing and so on that is needed so that a user – who might actually be a developer – can do some computing work. To that person, the resource is just “there” and works like they expect, like the phone system or any other utility.

    The interesting thing is, you can do this yourself. In fact, you probably already have been, or are now. It’s got a cool new trendy term – “Private Cloud” – but the fact is, if you have your setup automated, the HA and DR handled, balancing and performance tuning done, and a process wrapped around it all, you can call yourself a “Cloud Provider”.

    A good example here is your e-mail system. Your users – pretty much your whole company – just log into e-mail and expect it to work. To them, you are the “Cloud” provider. On your side, the more you automate and provision the system, the more you act like a Cloud Provider.

    Another example is a database server. In this case, the “end user” is usually the development team, or perhaps your SharePoint group and so on. The data professionals configure, monitor, tune and balance the system all the time. The more this is automated, the more you’re acting like a Cloud Provider. Lots of companies help you do this in your own data centers, from VMware to IBM and many others. Microsoft's offering here is based around System Center – they have a “cloud in a box” provisioning system that’s actually pretty slick.

    The most difficult part of operating a Private Cloud is probably the scale factor. In the case of Windows Azure and SQL Azure, we handle this in multiple ways – and we're happy to share how we do it. It’s not magic, and the algorithms for balancing (like the one we started with, called Paxos) are well known. The key is the knowledge, infrastructure and people.

    Sure, you can do this yourself, and in many cases, such as top-secret or private systems, you probably should. But there are times when you should evaluate using Azure or other vendors, or even multiple vendors, to spread your risk. All of this should be based on client need, not on what you already know how to do.

    So congrats on your new role as a “Cloud Provider”. If you have an e-mail system or a database platform, you can put that right on your resume.

    Read the article
