Search Results

Search found 23613 results on 945 pages for 'query parameters'.


  • OData – The easiest service I can create

    - by Jon Dalberg
    I wanted to create an OData service with the least amount of code, so I fired up Visual Studio and got cracking. I decided to serve up a list of naughty words and make them read-only. Create a new web project. I created an empty MVC 2 application, but MVC is not required for OData. Add a new WCF Data Service to the project. I named mine NastyWords.svc since I'm serving up a list of nasty words. Add a class to expose via the service:

        [DataServiceKey("Word")]
        public class NastyWord
        {
            public string Word { get; set; }
        }

    I need to be able to uniquely identify instances of NastyWord for the DataService, so I used the DataServiceKey attribute with the "Word" property as the key. I could have added an "ID" property which would have uniquely identified them, and would then not need the "DataServiceKey" attribute because the DataService would apply some reflection and heuristics to guess at which property would be the unique identifier. However, the words themselves are unique, so adding an "ID" property would be redundantly repetitive. Then I created a data source to expose my NastyWord objects to the service. This is just a simple class with IQueryable<T> properties exposing the entities for my service:

        public class NastyWordsDataSource
        {
            private static IList<NastyWord> words = new List<NastyWord>
            {
                new NastyWord{ Word="crap"},
                new NastyWord{ Word="darn"},
                new NastyWord{ Word="hell"},
                new NastyWord{ Word="shucks"}
            };

            public NastyWordsDataSource()
            {
                NastyWords = words.AsQueryable();
            }

            public IQueryable<NastyWord> NastyWords { get; private set; }
        }

    Now I can go to the NastyWords.svc class and tell it which data source to use and which entities to expose:

        public class NastyWords : DataService<NastyWordsDataSource>
        {
            // This method is called only once to initialize service-wide policies.
            public static void InitializeService(DataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }
        }

    Compile, browse to my NastyWords.svc, and weep with joy. Now I can query my service just like any other OData service. Next time, I'll modify this service to allow updates to be sent so I can build up my list of nasty words. Enjoy!
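
    A quick way to exercise the finished service from client code: the OData query options are just URL query parameters, so any HTTP client works. This is a minimal sketch, not part of the original walkthrough; the localhost URL and port are assumptions for a local test run.

        using System;
        using System.Net;

        class ODataQueryDemo
        {
            static void Main()
            {
                using (var client = new WebClient())
                {
                    // $filter, $orderby and $top are standard OData query parameters;
                    // the host and port below are assumptions for a local debug session.
                    string url = "http://localhost:1234/NastyWords.svc/NastyWords" +
                                 "?$filter=startswith(Word,'c')&$orderby=Word&$top=10";

                    string feed = client.DownloadString(url);
                    Console.WriteLine(feed); // Atom feed containing the matching NastyWord entries
                }
            }
        }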

    Read the article

  • NHibernate Pitfalls: Custom Types and Detecting Changes

    - by Ricardo Peres
    This is part of a series of posts about NHibernate Pitfalls. See the entire collection here. NHibernate supports the declaration of properties of user-defined types, that is, not entities, collections or primitive types. These are used for mapping a database column, of any type, into a different type, which may not even be an entity; think, for example, of a custom user type that converts a BLOB column into an Image. User types must implement the interface NHibernate.UserTypes.IUserType. This interface specifies an Equals method that is used for comparing two instances of the user type. If this method returns false, the entity is marked as dirty and, when the session is flushed, an UPDATE will be triggered. So, in your custom user type, you must implement this carefully so that the entity is not mistakenly considered changed. For example, you can cache the original column value inside the user type and compare it with the one in the other instance. Let's see an example implementation of a custom user type that converts a Byte[] from a BLOB column into an Image:

        [Serializable]
        public sealed class ImageUserType : IUserType
        {
            private Byte[] data = null;

            public ImageUserType()
            {
                this.ImageFormat = ImageFormat.Png;
            }

            public ImageFormat ImageFormat { get; set; }

            public Boolean IsMutable
            {
                get { return (true); }
            }

            public Object Assemble(Object cached, Object owner)
            {
                return (cached);
            }

            public Object DeepCopy(Object value)
            {
                return (value);
            }

            public Object Disassemble(Object value)
            {
                return (value);
            }

            public new Boolean Equals(Object x, Object y)
            {
                return (Object.Equals(x, y));
            }

            public Int32 GetHashCode(Object x)
            {
                return ((x != null) ? x.GetHashCode() : 0);
            }

            public override Int32 GetHashCode()
            {
                return ((this.data != null) ? this.data.GetHashCode() : 0);
            }

            public override Boolean Equals(Object obj)
            {
                ImageUserType other = obj as ImageUserType;

                if (other == null)
                {
                    return (false);
                }

                if (Object.ReferenceEquals(this, other) == true)
                {
                    return (true);
                }

                return (this.data.SequenceEqual(other.data));
            }

            public Object NullSafeGet(IDataReader rs, String[] names, Object owner)
            {
                Int32 index = rs.GetOrdinal(names[0]);
                Byte[] data = rs.GetValue(index) as Byte[];

                this.data = data as Byte[];

                if (data == null)
                {
                    return (null);
                }

                using (MemoryStream stream = new MemoryStream(this.data ?? new Byte[0]))
                {
                    return (Image.FromStream(stream));
                }
            }

            public void NullSafeSet(IDbCommand cmd, Object value, Int32 index)
            {
                if (value != null)
                {
                    Image data = value as Image;

                    using (MemoryStream stream = new MemoryStream())
                    {
                        data.Save(stream, this.ImageFormat);
                        value = stream.ToArray();
                    }
                }

                (cmd.Parameters[index] as DbParameter).Value = value ?? DBNull.Value;
            }

            public Object Replace(Object original, Object target, Object owner)
            {
                return (original);
            }

            public Type ReturnedType
            {
                get { return (typeof(Image)); }
            }

            public SqlType[] SqlTypes
            {
                get { return (new SqlType[] { new SqlType(DbType.Binary) }); }
            }
        }

    In this case, we need to cache the original Byte[] data because it's not easy to compare two Image instances, unless, of course, they are the same.
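
    A quick illustration of why that caching matters (a minimal sketch; the file name is hypothetical): System.Drawing.Image does not override Equals, so two images loaded from identical bytes compare as different references, while comparing the raw bytes gives the intended answer.

        using System;
        using System.Drawing;
        using System.IO;
        using System.Linq;

        class ImageEqualityDemo
        {
            static void Main()
            {
                // Hypothetical file name; any image will do.
                Byte[] original = File.ReadAllBytes("photo.png");
                Byte[] copy = (Byte[])original.Clone();

                Image first = Image.FromStream(new MemoryStream(original));
                Image second = Image.FromStream(new MemoryStream(copy));

                // Image does not override Equals, so this is a reference comparison:
                // prints False, which would mark the entity dirty on every flush.
                Console.WriteLine(Object.Equals(first, second));

                // Comparing the cached raw bytes gives the intended answer: prints True.
                Console.WriteLine(original.SequenceEqual(copy));
            }
        }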

    Read the article

  • Methodology for understanding jQuery plugins and APIs developed by third parties

    - by Taoist
    I have a question about third-party jQuery plugins and APIs and the methodology for understanding them. Recently I downloaded the jQuery Masonry/Infinite Scroll plugin and I couldn't figure out how to configure it based on the instructions. So I downloaded a fully developed demo, then manually deleted everything that wouldn't break the functionality. The code that was left allowed me to understand the plugin in much greater detail than the documentation. I'm now having a similar issue with a plugin called jQuery Knob: http://anthonyterrien.com/knob/ If you look at the jQuery Knob readme file, it says this is working code:

        $(function() {
            $('.dial').trigger('configure', {
                "min": 10,
                "max": 40,
                "fgColor": "#FF0000",
                "skin": "tron",
                "cursor": true
            });
        });

    But as far as I can tell it isn't at all. The readme also says the plugin uses Canvas. I am wondering if I am supposed to wrap this code in a canvas context or if this functionality is already part of the plugin. I know this kind of "question" might not fit in here, but I'm a bit confused about the assumptions behind reading these kinds of documentation and thought I would post the query regardless. Curious to see if this is due to my "newbie" programming experience or if this is something seasoned coders also fight with. Thank you.

    Edit: In response to Tyanna's reply, I modified the code and it still doesn't work. I posted it below. I made sure that I checked the Chrome console to ensure the basics were taken care of, such as not getting a read error on the library.

        <!DOCTYPE html>
        <meta charset="UTF-8">
        <title>knob</title>
        <link rel="stylesheet" href="http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/themes/hot-sneaks/jquery-ui.css" type="text/css" />
        <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.js" charset="utf-8"></script>
        <script src="https://ajax.googleapis.com/ajax/libs/jqueryui/1.8.21/jquery-ui.min.js"></script>
        <script src="js/jquery.knob.js"></script>

        <div id="button1">test</div>

        <script>
        $(function() {
            $("#button1").click(function () {
                $('.dial').trigger('configure', {
                    "min": 10,
                    "max": 40,
                    "fgColor": "#FF0000",
                    "skin": "tron",
                    "cursor": true
                });
            });
        });
        </script>

    Read the article

  • Designing a completely new database/GUI solution for my company

    - by user1277304
    I'm no expert when it comes to everything Visual Studio 2010 and utilizing SQL Server 2008. I'm sure some of the personal projects I've built for my own use would get laughed off the face of the planet, but SQL CE has been the solution I was looking for in those home-type projects. And they work, flawlessly. Now I feel it's time to step up to the big league. I want to develop a complete, unified and module-based solution for the company that I'm working for. We're still using stuff from the 80s, for goodness sake! I use Excel and query the ancient database on my own because I can't stand the GUI. Nothing against people of age, but the IDE our programmers are using is from the stone age, and they use APL of all things with it. I've yet to see a radio button control anywhere in the GUI where it would make sense. Anyway, I want to do this right from the ground up. I'm by no means a newbie when it comes to programming in .NET 2010; however, I want the entire solution to be professionally done. I want version control, test projects, project flow, SQL 2008 integration and all the bells and whistles that come with that. I know for a fact that if we had something like that running, not only would development costs and time be slashed fourfold, but the possibilities for expansion and performance would skyrocket. (Between the GUI and our DB engine, it can only use ONE CORE! ONE! It's 2012, for goodness sake!) Our business is growing and our current ancient solution just can't keep up, and I'd hate to see our business go down in flames because our programmer is stuck in the 80's and refuses to use anything current. So I ask you guys, the experts and know-it-alls, where do I start? Are there any gems of good books out there in the haystack of all the "...for Dummies" type of deals? I already have several people backing me in this endeavor, and while it may seem brash to just usurp the current programmers, I'm doing this for the company as a whole. Thank you guys for your time.

    Read the article

  • About the K computer

    - by nospam(at)example.com (Joerg Moellenkamp)
    Okay – after getting yet another mail because of the new #1 on the Top500 list, I want to add some comments from my side: Yes, the system is using a SPARC processor. And that is great news for a SPARC fan like me. It is using the SPARC64 VIIIfx processor from Fujitsu clocked at 2 GHz. No, it isn't the only one. Most people are saying there are two systems in the Top500 list using SPARC (#77 JAXA and #1 K), but in fact there are three. The Tianhe-1 supercomputer (#2 on the Top500 list) contains 2048 Galaxy "FT-1000" 1 GHz 8-core processors. Don't know it? The FeiTeng-1000 – this proc is an 8-core, 8-threads-per-core, 1 GHz processor made in China. And it's SPARC based. By the way – this sounds really familiar to me – perhaps the people just took the open-sourced UltraSPARC T2 design, because some of the parameters sound just too similar. However, it looks like Tianhe-1 is using the SPARCs as input nodes and not as compute nodes. No, I don't see it as the next M-series processor. Simple reason: you can't create SMP systems out of them – it simply doesn't have the functionality to do so. Even when there are multiple CPUs on a single board, they are not connected like an SMP/NUMA shared-memory machine – they are connected with the cluster interconnect (in this case the Tofu interconnect) and work like a large cluster. Yes, it has a lot of oomph in Linpack – however, I assume a lot came from the extensions to the SPARCv9 standard. No, Linpack has no relevance for any commercial workload – Linpack is such a special load that even some HPC people are arguing it isn't really a good benchmark for HPC. It's embarrassingly parallel, and it can work with relatively small interconnects compared to the interconnects in SMP systems (however, we are getting into the spheres where SMP interconnects were a few years ago). Amdahl isn't hitting that hard when running Linpack. Yes, it's a good move to use SPARC. At some time in the last 10 years, there was an interesting twist in perception: SPARC was considered the proprietary architecture and x86 the open architecture. However, it's vice versa – try to create an x86 clone and you have a lot of intellectual property problems; create a SPARC clone and you have to spend 100 bucks or so to get the specification from the SPARC Foundation and develop your own SPARC processor. Fujitsu has been doing this for a long time now. So they had their own processor, their own know-how. So why was SPARC a good choice? Well – essentially Fujitsu can do what they want with their core, as it is their core; for example, adding the extensions to the SPARCv9 instruction set – getting Intel to create extensions to x86 to help you with your product is a little bit harder. So Fujitsu could do what they needed to do with their processor in order to create such a supercomputer. No, the K is really using no FPGAs or GPUs as accelerators. The K is really using the CPU for doing this job. Yes, it has a significantly enhanced FPU capable of executing 8 instructions in parallel. No, it doesn't run Solaris. Yes, it uses Linux. No, it doesn't hurt me ... as my colleague Roland Rambau (he knows a lot about HPC) once said to me ... it doesn't matter which OS it is, as long as it stays out of the way of the workload in HPC.

    Read the article

  • Data Virtualization: Federated and Hybrid

    - by Krishnamoorthy
    Data becomes useful when it can be leveraged at the right time. It is not only enterprise application stores that operate on data of large volume, velocity and variety; mobile and social computing need to operate on the same kind of data. Replicating and transferring large swaths of data is one challenge faced in the field of data integration. However, smaller chunks of data aggregated from a variety of sources present an even more interesting challenge in the industry. Over the past few decades, technology trends focused on best user experience, operating systems, high performance computing, high performance web sites, analysis of warehouse data, service oriented architecture, social computing, cloud computing, and big data. Operating on 'dark data' becomes mandatory in coming technology trends, although no solution can turn dark data into useful data in a single day. Useful data is characterized by contextual, personalized and on-time delivery. In most cases, data from a single source may not complete the picture. Data has to be combined and computed from various sources, where data may be captured as hybrid data, meaning the combination of structured and unstructured data. Since related data is often found across disparate sources, effectively integrating these sources determines how useful this data ultimately becomes. Technology trends in 2013 are expected to focus on big data and private cloud. Consumers are not merely interested in where data is located or how data is retrieved and computed; they are interested in how quickly, and in what ways, the data can be leveraged. In many cases, data virtualization is the right solution, and it is expected to play a foundational role for SOA, cloud integration, and big data. The Oracle Data Integration portfolio includes a data virtualization product called ODSI (Oracle Data Service Integrator). Unlike other data virtualization solutions, ODSI can perform both read and write operations on federated/hybrid data (RDBMS, web services, delimited files and XML). The ODSI engine is built on XQuery, hence an ODSI user can perform computations on data using either XQuery or SQL. Built-in data and query caching features reduce latency in repetitive calls. Rightly positioned, ODSI can result in a highly scalable model, reducing spend on additional hardware infrastructure.

    Read the article

  • Some PowerShell goodness

    - by KyleBurns
    Ever work somewhere where processes dump files into folders to maintain an archive?  Me too, and Windows Explorer hates it.  Very often I find myself needing to organize these files into subfolders so that I can go after files without locking up Windows Explorer, and my answer used to be to write a program in something like C# to do the job.  These programs will typically enumerate the files in a folder and move each file to a subdirectory named based on a datestamp.  The last such program I wrote had to use lower-level Win32 API calls to perform the enumeration, because it appears the standard .NET calls make use of the same method of enumerating the directories that Windows Explorer chokes on when dealing with a large number of entries in a particular directory, so a simple task was accomplished with a lot of code. Of course, this little utility was just something I used to make my life easier and "not a production app", so it was in my local source folder and not source control when my hard drive died.  So... I was getting ready to re-create it and thought it might be a good idea to play with PowerShell a bit - something I had been wanting to do but had not yet met a requirement that made me do it.  The resulting script was amazingly succinct: even with the flexibility of parameterization and line breaks added for readability, it was only about 25 lines long.  Here's the code, with discussion following:

        param(
            [Parameter(
                Mandatory = $false,
                Position = 0,
                HelpMessage = "Root of the folders or share to archive.  Be sure to end with appropriate path separator"
            )]
            [String] $folderRoot = "\\fileServer\pathToFolderWithLotsOfFiles\",

            [Parameter(
                Mandatory = $false,
                Position = 1
            )]
            [int] $days = 1
        )

        dir $folderRoot | ?{ (!($_.PsIsContainer)) -and ((get-date) - $_.lastwritetime).totaldays -gt $days } | %{
            [string]$year = $([string]$_.lastwritetime.year)
            [string]$month = $_.lastwritetime.month
            [string]$day = $_.lastwritetime.day
            $dir = $folderRoot + $year + "\" + $month + "\" + $day
            if (!(test-path $dir)) {
                new-item -type container $dir
            }
            Write-output $_
            move-item $_.fullname $dir
        }

    The script starts by declaring two parameters.  The first parameter holds the path to the folder that I am going to be sorting into subdirectories.  The path separator is intended to be included in this argument because I didn't want to mess with determining whether this was local or UNC and picking the right separator in code, but this could be easily improved upon using Path.Combine, since PowerShell has access to the full framework libraries.  The second parameter holds a minimum age in days for files to be removed from the root folder.  The script then pipes the dir command through a query to include only files (by excluding containers) and, of those, only entries that meet the age requirement based on the last-modified datestamp.  For each of those, the datestamp is used to construct a folder name in the format YYYY\MM\DD (if you're in an environment where even a day's worth of files needs to be divided further, you could make this more granular) and the folder is created if it does not yet exist.  Finally, the file is moved into the directory. One of the things that was really cool about using PowerShell for this task is that the new-item command is smart enough to create the entire subdirectory structure with a single call.
    In previous code that I have written to do this kind of thing, I would have to test the entire tree leading down to the subfolder I want, leading to a lot of branching code that detracted from being able to quickly look at the code and understand the job it performs. Overall, I have to say I'm really pleased with what has been done making PowerShell powerful and useful.
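
    The C# utilities the post mentions aren't shown; purely for comparison, here is a hedged sketch of the same idea in C# (the root path is hypothetical, and it assumes .NET 4's Directory.EnumerateFiles, which streams entries lazily instead of buffering the whole directory):

        using System;
        using System.IO;

        class ArchiveByDate
        {
            static void Main()
            {
                // Hypothetical root folder and age threshold.
                string folderRoot = @"\\fileServer\pathToFolderWithLotsOfFiles";
                int days = 1;

                // EnumerateFiles yields entries lazily, so huge folders don't exhaust memory.
                foreach (string file in Directory.EnumerateFiles(folderRoot))
                {
                    DateTime stamp = File.GetLastWriteTime(file);
                    if ((DateTime.Now - stamp).TotalDays <= days) continue;

                    // Build the YYYY\MM\DD subfolder; CreateDirectory builds the whole tree.
                    string target = Path.Combine(folderRoot,
                        stamp.Year.ToString(), stamp.Month.ToString(), stamp.Day.ToString());
                    Directory.CreateDirectory(target);

                    File.Move(file, Path.Combine(target, Path.GetFileName(file)));
                }
            }
        }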

    Read the article

  • Assessing Relative Maintainability

    - by João Bragança
    We (a contractor, actually) are implementing an off-the-shelf system to replace a legacy homegrown system for the core domain of the company (designing widgets). Unfortunately both systems will have to run concurrently for some time, as the product just isn't ready yet. Also, the decision was made to only migrate some of the widgets from the legacy system, based on date of last sale activity. Later on a new requirement came down: certain people in the company, most of them outside of the widget development context, want to search all widgets. The search results screen has 3 pieces of data: a GUID, a human-readable id that is searchable, and a brief description (may need to be searchable in the future). In the widget details, there will be multiple screens. These screens align very well along SOA / bounded context lines - a screen for marketing data, a screen for sales history, etc. UML ahead! I am probably using the wrong kind of arrows here, so please forgive me. The current solution - which is not in production yet - is something like the following: both systems will be queried and the controller will merge the results. The new system has its own proprietary query language (we've alleviated this a bit with a LINQ provider). It also puts a lot of data on the wire: 15 search results typically run about 60k of unintelligible SOAP-wrapped XML. So I would prefer to avoid querying this system directly. These two systems publish events to help us integrate with other systems, mainly an ERP system. One of these events contains all the data necessary for the search screen. I proposed the following alternative. However, I am being told that 'adding another database' will create more maintenance down the road. I believe this to be false, as I had to add a relatively simple feature that took several hours longer than anticipated because of this merging code. I want to get a feel for which system is more maintainable in the long run. I personally have not had the burden of maintaining any large system, and I want something more than my gut. Specifically I'd like to know if having more, specialized physical databases is more or less maintainable than having fewer, larger physical databases.
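
    A rough sketch of the proposed alternative (all names are hypothetical, since the question doesn't show the real code): subscribe to the event both systems already publish and project it into a small, dedicated search table that the search screen queries directly, instead of merging live results from both systems in the controller.

        using System;
        using System.Data.SqlClient;

        // Hypothetical event published by both systems whenever a widget changes.
        public class WidgetChangedEvent
        {
            public Guid WidgetId { get; set; }
            public string HumanReadableId { get; set; }
            public string Description { get; set; }
        }

        public class WidgetSearchProjection
        {
            private readonly string connectionString;

            public WidgetSearchProjection(string connectionString)
            {
                this.connectionString = connectionString;
            }

            // Project the event into a small, denormalized search table (WidgetSearch)
            // holding exactly the three pieces of data the search screen needs.
            public void Handle(WidgetChangedEvent e)
            {
                const string sql =
                    @"MERGE WidgetSearch AS target
                      USING (SELECT @Id AS WidgetId) AS source ON target.WidgetId = source.WidgetId
                      WHEN MATCHED THEN
                          UPDATE SET HumanReadableId = @HumanId, Description = @Desc
                      WHEN NOT MATCHED THEN
                          INSERT (WidgetId, HumanReadableId, Description)
                          VALUES (@Id, @HumanId, @Desc);";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(sql, connection))
                {
                    command.Parameters.AddWithValue("@Id", e.WidgetId);
                    command.Parameters.AddWithValue("@HumanId", e.HumanReadableId);
                    command.Parameters.AddWithValue("@Desc", e.Description);
                    connection.Open();
                    command.ExecuteNonQuery();
                }
            }
        }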

    Read the article

  • jQuery - discrepancy between class name and selectors

    - by Ciel
    I have the following code that I wrote, which I personally found to be pretty nice. It takes a <ul> and it drops down the contents when clicked. But I am having a disconnect here in comprehension, one I had to solve with what I feel is a 'dirty hack'. The problem is that I do not want the class 'sidebar-dropdown-open' to be so 'hardwired' in the plugin. However, I discovered that there is a very stark difference between $('.sidebar-dropdown-open'), 'sidebar-dropdown-open', and even '.sidebar-dropdown-open'. I 'solved' this problem by including two different 'parameters' in my plugin, but I was wondering if someone might give me some insight as to how I could do this better, and why it was behaving this way.

    Wiring (document load):

        $(document).ready(function () {
            $('[data-role="sidebar-dropdown"]').drawer({
                open: 'sidebar-dropdown-open',
                css: '.sidebar-dropdown-open'
            });
        });

    HTML:

        <ul>
            <li class=" dropdown" data-role="sidebar-dropdown">
                <a href="pages/.." class="remote">Link Text</a>
                <ul class="sub-menu light sidebar-dropdown-menu">
                    <li><a class="remote" href="pages/...">Link Text</a></li>
                    <li><a class="remote" href="pages/...">Link Text</a></li>
                    <li><a class="remote" href="pages/...">Link Text</a></li>
                </ul>
            </li>
        </ul>

    JavaScript:

        (function ($) {
            $.fn.drawer = function (options) {
                // Create some defaults, extending them with any options that were provided
                var settings = $.extend({
                    open: 'open',
                    css: '.open'
                }, options);

                return this.each(function () {
                    $(this).on('click', function (e) {
                        // slide up all open dropdown menus
                        $(settings.css).not($(this)).each(function () {
                            $(this).removeClass(settings.open);
                            // retrieve the appropriate menu item
                            var $menu = $(this).children(".dropdown-menu, .sidebar-dropdown-menu");
                            // slide it up
                            $menu.slideUp('fast');
                            $menu.removeClass('active');
                        });
                        // mark this menu as open
                        $(this).addClass(settings.open);
                        // retrieve the appropriate menu item
                        var $menu = $(this).children(".dropdown-menu, .sidebar-dropdown-menu");
                        // slide down the one clicked on.
                        $menu.slideDown(100);
                        $menu.addClass('active');
                        e.preventDefault();
                        e.stopPropagation();
                    }).on("mouseleave", function () {
                        $(this).children(".dropdown-menu").hide().delay(300);
                    });
                });
            };
        })(jQuery);

    I have tried using settings.open and demanding that it just be a class name (.open), etc. - but that does not seem to work. It seems to get ignored by the removeClass function.

    Read the article

  • ArchBeat Link-o-Rama for 10-24-2012

    - by Bob Rhubart
    Play Oracle Vanquisher Here's a little respite from whatever it is you normally spend your time on. Oracle Vanquisher is an online diversion that makes a game of data center optimization. According to the description: "Armed with a cool Oracle vacuum pack suit and a strategic IT roadmap, you will thwart threats and optimize your data center to increase your company’s stock price and boost your company's position." Mainly you avoid electric shock and killer birds. The current high score belongs to someone identified as "TEN." My score? Never mind. Book: DevOps for Developers | The Java Source The subject of DevOps has come up in a couple of recent OTN ArchBeat Podcasts, so it's somewhat serendipitous that Tori Weildt's recent blog post offers an overview of Java Champion Michael Hutterman's new book, DevOps for Developers, now available from Apress. Bring Your Own Device (BYOD) : Context is everything… | The ORACLE-BASE Blog BYOD is a factor in the evolution of IT, but in what context? "The real IT work in companies is still being done on PCs," says Oracle ACE Director Tim Hall. "Yes, you can use a cloud service on your phone, but look around the office and you will see those cloud services are actually being used by people on PCs." Oracle in the Cloud: Oracle EBusiness Suite sizing | Tom Laszewski Cloud expert Tom Laszewski shares several technical resources that will be helpful for sizing of Oracle EBusiness Suite. Setting Up, Configuring, and Using an Oracle WebLogic Server Cluster Author and expert Yuli Vasiliev shows you how to take advantage of multiple Oracle WebLogic Server instances grouped into a cluster to maximize scalability and availability. Webcast: Reduce Costs with Oracle's Database Storage Management Watch this! Join Oracle experts Kevin Jernigan and Margaret Hamburger for an interactive webcast in which you'll learn how Oracle's Database Storage Management can reduce storage costs and management complexity while improving query performance to meet service-level agreements and compliance requirements. Event Date: Tuesday, November 6, 2012 Event Time: 10 a.m. PT/1 p.m. ET Thought for the Day "Most software today is very much like an Egyptian pyramid with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves." — Alan Kay Source: softwarequotes.com

    Read the article

  • ClearTrace Performance on 170GB of Trace Files

    - by Bill Graziano
    I’ve always worked to make ClearTrace perform well.  That’s probably because I spend so much time watching it work.  I’m often going through two or three gigabytes of trace files but I rarely get the chance to run it on a really large set of files. One of my clients wanted to run a full trace for a week and then analyze the results.  At the end of that week we had 847 200MB trace files for a total of nearly 170GB. I regularly use 200MB trace files when I monitor production systems.  I usually get around 300,000 statements in a file that size if it’s mostly stored procedures.  So those 847 trace files contained roughly 250 million statements.  (That’s 730 bytes per statement if you’re keeping track.  Newer trace files have some compression in them but I’m not exactly sure what they’re doing.)  On a system running 1,000 statements per second I get a new file every five minutes or so. It took 27 hours to process these files on an older development box.  That works out to 1.77MB/second.  That means ClearTrace processed about 2,654 statements per second. You can query the data while you’re loading it but I’ve found it works better to use a second instance of ClearTrace to do this.  I’m not sure why yet but I think there’s still some dependency between the two processes. ClearTrace is almost always CPU bound.  It’s really just a huge, ugly collection of regular expressions.  It only writes a summary to its database at the end of each trace file so that usually isn’t a bottleneck.  At the end of this process, the executable was using roughly 435MB of RAM.  Certainly more than when it started but I think that’s acceptable. The database where all this is stored started out at 100MB.  After processing 170GB of trace files the database had grown to 203MB.  The space savings are due to the “datawarehouse-ish” design and only storing a summary of each trace file. You can download ClearTrace for SQL Server 2008 or test out the beta version for SQL Server 2012.  Happy Tuning!
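
    ClearTrace's actual implementation isn't shown in the post; purely as an illustration of the kind of compiled-regex normalization a trace-summarizing tool performs (a sketch, not ClearTrace's code), stripping literals lets statements that differ only in parameter values roll up into one summarized entry:

        using System;
        using System.Text.RegularExpressions;

        class StatementNormalizer
        {
            // Compiled once and reused: strip literals so "WHERE Id = 42" and
            // "WHERE Id = 99" summarize as the same statement.
            private static readonly Regex StringLiterals =
                new Regex(@"'[^']*'", RegexOptions.Compiled);
            private static readonly Regex NumericLiterals =
                new Regex(@"\b\d+\b", RegexOptions.Compiled);

            public static string Normalize(string sql)
            {
                string withoutStrings = StringLiterals.Replace(sql, "{STR}");
                return NumericLiterals.Replace(withoutStrings, "{NUM}");
            }

            static void Main()
            {
                Console.WriteLine(Normalize(
                    "SELECT * FROM Orders WHERE CustomerId = 42 AND City = 'Omaha'"));
                // Prints: SELECT * FROM Orders WHERE CustomerId = {NUM} AND City = {STR}
            }
        }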

    Read the article

  • Enabling Attachments in Oracle Forms

    - by Manoj Madhusoodanan
    In this blog I will demonstrate how we can enable attachments in an Oracle Form. The user's ask is to capture a job description, which will be a long text. Oracle Apps has a feature called attachments; through some easy setup we can enable attachments in forms, and we will address the customer's ask through this feature.
    Step 1: Identify the form name. The first thing you have to do is identify the form on which to enable attachments. In this scenario the form will be PERWSDJT (the Job form in HRMS).
    Step 2: Check whether the Document Entity exists. If it exists, do nothing; otherwise create a new one.
    Step 3: Create Document Categories. Here I will create a category "Job Description" with Default Data Type "Long Text" and Application "Human Resources". Once that is done, I will assign it to the Create Job form.
    Step 4: Query Attachment Functions for PERWSDJT. Click on Categories and add Job Description. Make sure the Enabled check box is checked.
    Step 5: Turn off the HR: Use Standard Attachments profile option.
    Output: Go to Work Structures > Job > Description

    Read the article

  • PASS Summit for SQL Starters

    - by Davide Mauri
    I've received a bunch of emails from PASS Summit "First Timers" who are also somehow new to SQL Server (by "somehow" I mean people with less than 6 months' experience but with some basic knowledge of the SQL Server engine) or are catching up from SQL Server 2000. The common question regards the sessions one should not miss in order to: have a broad view of the entire SQL Server platform; have some insight into some specific areas of SQL Server. Given that I'm on (semi-)vacation and that I have more free time (not true, I have to prepare slides & demos for several conferences, PASS Summit - Building the Agile Data Warehouse with SQL Server 2012 - and PASS 24H - Agile Data Warehousing with SQL Server 2012 - among them... but let's pretend it to be true), I've decided to write a post answering these common questions. Of course this is my personal point of view, and given that the number and quality of sessions that will be delivered at PASS Summit is so high that it is very difficult to make a choice, feel free to jump into the discussion and leave your feedback or - even better - answer with another post. I'm sure it will be very helpful to all the SQL Server beginners out there. I've imposed on myself a choice of 6 sessions at maximum for each track. Why 6? Because it's the maximum number of sessions you can follow in one day, and given that all the sessions will be on the Summit DVD, they are the answer to the following question: "If I have one day to spend in training, which sessions should I watch?". Of course a Summit is not like a course, so a lot of the very basic concepts of well-established technologies won't be found here. Analysis Services, Integration Services and MDX are not part of the Summit this time (at least not the basic parts of them). Enough with that; let's start with the session list ideal for a good overview of the whole SQL Server platform:
    - Geospatial Data Types in SQL Server 2012
    - Inside Unstructured Data: SQL Server 2012 FileTable and Semantic Search
    - XQuery and XML in SQL Server: Common Problems and Best Practice Solutions
    - Microsoft's Big Play for Big Data
    - Dashboards: When to Choose Which MSBI Tool
    - Microsoft BI End-User Tools 360°
    For what concerns Database Development, I recommend the following sessions:
    - Understanding Transaction Isolation Levels
    - What to Look for in Execution Plans
    - Improve Query Performance by Fixing Bad Parameter Sniffing
    - A Window into Your Data: Using SQL Window Functions
    - Practical Uses and Optimization of New T-SQL Features in SQL Server 2012
    - Taking MERGE Beyond the Basics
    For Business Intelligence Information Delivery:
    - Analyzing SSAS Data with Excel
    - Building Compelling Power View Reports
    - Managed Self-Service BI
    - PowerPivot 101
    - SharePoint for Business Intelligence
    - The Best Microsoft BI Tools You've Never Heard Of
    And for Business Intelligence Architecture & Development:
    - BI Power Hour
    - Building a Tabular Model Database
    - Enterprise Information Management: Bringing Together SSIS, DQS, and MDS
    - SSIS Design Patterns
    - Storing Columnstore Indexes
    - Hadoop and Its Ecosystem Components in Action
    Besides the listed sessions, First Timers should also take a look at the page PASS set up for them: http://www.sqlpass.org/summit/2012/Connect/FirstTimers.aspx See you at PASS Summit!

    Read the article

  • Violation of the DRY Principle

    - by Onorio Catenacci
    I am sure there's a name for this anti-pattern somewhere; however, I am not familiar enough with the anti-pattern literature to know it. Consider the following scenario: or0 is a member function in a class. For better or worse, it's heavily dependent on class member variables. Programmer A comes along and needs functionality like or0, but rather than calling or0, Programmer A copies and renames the entire class. I'm guessing that she doesn't call or0 because, as I say, it's heavily dependent on member variables for its functionality. Or maybe she's a junior programmer and doesn't know how to call it from other code. So now we've got or0 and c0 (c for copy). I can't completely fault Programmer A for this approach--we all get under tight deadlines and we hack code to get work done. Several programmers maintain or0, so it's now version orN. c0 is now version cN. Unfortunately most of the programmers that maintained the class containing or0 seemed to be completely unaware of c0--which is one of the strongest arguments I can think of for the wisdom of the DRY principle. And there may also have been independent maintenance of the code in c. Either way, it appears that or0 and c0 were maintained independently of each other. And, joy and happiness, an error is occurring in cN that does not occur in orN. So I have a few questions:
    1.) Is there a name for this anti-pattern? I've seen this happen so often I'd find it hard to believe this is not a named anti-pattern.
    2.) I can see a few alternatives:
    a.) Fix orN to take a parameter that specifies the values of all the member variables it needs. Then modify cN to call orN with all of the needed parameters passed in.
    b.) Try to manually port fixes from orN to cN. (Mind you, I don't want to do this, but it is a realistic possibility.)
    c.) Recopy orN to cN--again, yuck, but I list it for the sake of completeness.
    d.) Try to figure out where cN is broken and then repair it independently of orN.
    Alternative a seems like the best fix in the long term, but I doubt the customer will let me implement it. Never time or money to fix things right but always time and money to repair the same problem 40 or 50 times, right? Can anyone suggest other approaches I may not have considered? If you were in my place, which approach would you take? If there are other questions and answers here along these lines, please post links to them. I don't mind removing this question if it's a dupe, but my searching hasn't turned up anything that addresses this question yet. EDIT: Thanks everyone for all the thoughtful responses. I asked about a name for the anti-pattern so I could research it further on my own. I'm surprised this particular bad coding practice doesn't seem to have a "canonical" name for it.
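
    As an illustration of alternative (a) above (purely hypothetical names, since the question doesn't show the real code): the shared logic is pulled into one routine that takes its inputs as parameters, and both classes delegate to it, so a fix lands in a single place.

        // Hypothetical sketch of alternative (a): one shared routine, parameterized
        // on the values that or0/cN currently read from member variables.
        public static class SharedCalculation
        {
            public static decimal Calculate(decimal rate, int quantity, bool applyDiscount)
            {
                decimal total = rate * quantity;
                return applyDiscount ? total * 0.9m : total;
            }
        }

        public class OriginalClass
        {
            private decimal rate;
            private int quantity;
            private bool applyDiscount;

            // orN becomes a thin wrapper that passes its state explicitly...
            public decimal Or0()
            {
                return SharedCalculation.Calculate(rate, quantity, applyDiscount);
            }
        }

        public class CopiedClass
        {
            private decimal rate;
            private int quantity;

            // ...and cN calls the same routine, so future fixes benefit both classes.
            public decimal C0()
            {
                return SharedCalculation.Calculate(rate, quantity, applyDiscount: false);
            }
        }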

    Read the article

  • Sony Vaio WLAN problem using 12.04

    - by Fredrik
    I'm unable to get my WLAN to work on my Sony Vaio model VPCF23C5E. No problem connecting from Windows, smartphones, etc.

        $ sudo lshw -C network; lsb_release -a; uname -a; sudo rfkill list; dmesg | grep -i firm
        *-network
             description: Wireless interface
             product: AR9485 Wireless Network Adapter
             vendor: Atheros Communications Inc.
             physical id: 0
             bus info: pci@0000:02:00.0
             logical name: wlan0
             version: 01
             serial: 64:27:37:92:99:0f
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list rom ethernet physical wireless
             configuration: broadcast=yes driver=ath9k driverversion=3.2.0-29-generic firmware=N/A latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
             resources: irq:16 memory:f7000000-f707ffff memory:f7080000-f708ffff
        *-network
             description: Ethernet interface
             product: RTL8111/8168B PCI Express Gigabit Ethernet controller
             vendor: Realtek Semiconductor Co., Ltd.
             physical id: 0
             bus info: pci@0000:05:00.0
             logical name: eth0
             version: 06
             serial: f0:bf:97:dd:b2:bd
             size: 1Gbit/s
             capacity: 1Gbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168e-1.fw ip=192.168.1.4 latency=0 link=yes multicast=yes port=MII speed=1Gbit/s
             resources: irq:50 ioport:9000(size=256) memory:e2104000-e2104fff memory:e2100000-e2103fff
        LSB Version:    core-2.0-amd64:core-2.0-noarch:core-3.0-amd64:core-3.0-noarch:core-3.1-amd64:core-3.1-noarch:core-3.2-amd64:core-3.2-noarch:core-4.0-amd64:core-4.0-noarch
        Distributor ID: Ubuntu
        Description:    Ubuntu 12.04.1 LTS
        Release:        12.04
        Codename:       precise
        Linux siriedit 3.2.0-29-generic #46-Ubuntu SMP Fri Jul 27 17:03:23 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
        1: phy0: Wireless LAN
                Soft blocked: no
                Hard blocked: no
        2: hci0: Bluetooth
                Soft blocked: no
                Hard blocked: no
        [    1.287449] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored
        [   18.273582] [Firmware Bug]: ACPI(NGFX) defines _DOD but not _DOS

    I have seen some proposed solutions, e.g. "Wireless network cannot be enabled for Sony VAIO E series", but the answers there don't solve my problem... I'm out of ideas :-( What else can I check?

    Update:

        $ ifconfig -a
        eth0      Link encap:Ethernet  HWaddr f0:bf:97:dd:b2:bd
                  inet addr:192.168.1.4  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::f2bf:97ff:fedd:b2bd/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:5072 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:4444 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:4568435 (4.5 MB)  TX bytes:610624 (610.6 KB)
                  Interrupt:50 Base address:0xc000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:1498 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1498 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:114156 (114.1 KB)  TX bytes:114156 (114.1 KB)

        wlan0     Link encap:Ethernet  HWaddr 64:27:37:92:99:0f
                  inet addr:192.168.1.5  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::6627:37ff:fe92:990f/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:1277 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:472 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:483155 (483.1 KB)  TX bytes:61031 (61.0 KB)

    Read the article

  • Configuring trace file size and number in WebCenter Content 11g

    - by Kyle Hatlestad
    Lately I've been doing a lot of debugging using the System Output tracing in WebCenter Content 11g.  This is built-in tracing in the content server which provides a great level of detail on what's happening under the hood.  You can access the settings as well as a view of the tracing by going to Administration -> System Audit Information.  From here, you can select the tracing sections to include.  Some of my personal favorites are searchquery, systemdatabase, userstorage, and indexer.  Usually I'm trying to find out some information regarding a search, database query, or user information.  Besides debugging, it's also very helpful for performance tuning. One of the nice tricks with the tracing is it honors the wildcard (*) character.  So you can put in 'schema*' and gather all of the schema related tracing.  And you can notice if you select 'all' and update, it changes to just a *.   To view the tracing in real-time, you simply go to the 'View Server Output' page and the latest tracing information will be at the bottom. This works well if you're looking at something pretty discrete and the system isn't getting much activity.  But if you've got a lot of tracing going on, it would be better to go after the trace log file itself.  By default, the log files can be found in the <content server instance directory>/data/trace directory. You'll see it named 'idccs_<managed server name>_current.log'.  You may also find previous trace logs that have rolled over.  In this case they will be identified by a date/time stamp in the name.  By default, the server will rotate the logs after they reach 1MB in size.  And it will keep the most recent 10 logs before they roll off and get deleted.  If your server is in a cluster, then the trace file should be configured to be local to the node per the recommended configuration settings. If you're doing some extensive tracing and need to capture all of the information, there are a couple of configuration flags you can set to control the logs.

        #Change log size to 10MB and number of logs to 20
        FileSizeLimit=10485760
        FileCountLimit=20

    This is set by going to Admin Server -> General Configuration and entering them in the Additional Configuration Variables: section.  Restart the server and it should take on the new logging settings.

    Read the article

  • Framework for Everything - Where to begin? [Longer post]

    - by SquaredSoft
    Back story of this question; feel free to skip down to the specific question. Hello, I've been very interested in the idea of abstract programming the last few years. I've made about 30 attempts at creating a piece of software that is capable of almost anything you throw at it. Some of these attempts have taken upwards of a year and, while getting close, I've never released anything beyond my compiler. This has been something I've always tried wrapping my head around, and something is always missing. With the title, I'm sure you're assuming, "Yes of course you noob! You can't account for everything!" To which I have to reply, "Why not?" To give you some background into what I'm talking about, this all started with doing maybe a shade of gray-hat SEO software. I found myself constantly having to create similar, but slightly different, sets of code. I've gone through as many iterations of ways to communicate over HTTP as the universe has particles. "How many times am I going to have to write this multi-threaded class?" is something I found myself asking a lot. Sure, I could create a class library and just work with that, but I always felt I could optimize what I had, which often was a large undertaking and typically involved frequent use of the CTRL+A keyboard shortcut, mixed with the delete button. It dawned on me that it was time to invest in a plugin system. This would allow me to simply add snippets of code as time went on, and I could keep things in Subversion and distribute small chunks of code rather than something that encompasses only a specific function or design. This comes with its own complexity, of course, and by the time I had finished the software scope for this addition, it hit me that I would want to add to everything in the software, not just a new HTTP method or automation code for a specific website. Great, we're getting more abstract. However, the software that I have in my mind comes down to quite a few questions regarding its execution. I have to have some parameters for what I am going to do. After writing down what the perfect software would do in my mind, I came up with this as a list of requirements:
    - Should be able to use networking
    - A "Macro" or "Expression" system which would allow people to do something like: =First(=ParseToList(=GetUrl("http://www.google.com?q=helloworld!"), Template.Google))
    - Multithreaded
    - Able to add UI elements through some type of XML -- people can make their own addons, etc.
    - Can use third-party APIs through the plugins, such as Microsoft CRM, Exchange, etc.
    This would allow the software to essentially be used for everything. Really, any task you wish to automate, in a simple way. Making the UI was also extremely hard. How do you do all of this? It's very difficult. So my question: with so many attempts at this, I'm out of ideas for how to successfully complete it. I have a very specific idea in my mind, but I keep failing to execute it. I'm a self-taught programmer. I've been doing it for years, and work professionally in it, but I've never encountered something that would be as complex and in-depth as a system which essentially does everything. Where would you start? What are the best practices for design? How can I avoid constantly having to go back and optimize my software? What can I do to generalize this and draw everything out to completion? These are things I struggle with. P.S., I'm using C# as my main language.
    I feel like in this example I might be hitting the outer limit of the language, although I don't know if that is the case, or if I'm just a bad programmer. Thanks for your time.
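
    As a starting point for the plugin idea described above, here is a hedged sketch (the interface and names are hypothetical, not an existing framework): define a small plugin contract, then let the host discover implementations at runtime by reflection. Each add-on can contribute its own "expression" functions without the host being rewritten.

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;
        using System.Reflection;

        // Hypothetical plugin contract: each add-on names itself and contributes the
        // "expression" functions it provides (e.g. GetUrl, ParseToList).
        public interface IPlugin
        {
            string Name { get; }
            IDictionary<string, Func<object[], object>> Functions { get; }
        }

        public static class PluginLoader
        {
            // Scan a folder for assemblies and instantiate every IPlugin implementation found.
            public static IEnumerable<IPlugin> Load(string pluginFolder)
            {
                foreach (string file in Directory.EnumerateFiles(pluginFolder, "*.dll"))
                {
                    Assembly assembly = Assembly.LoadFrom(file);

                    foreach (Type type in assembly.GetTypes()
                        .Where(t => typeof(IPlugin).IsAssignableFrom(t) && !t.IsAbstract && !t.IsInterface))
                    {
                        yield return (IPlugin)Activator.CreateInstance(type);
                    }
                }
            }
        }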

    Read the article

  • ADO and Two Way Storage Tiering

    - by Andy-Oracle
    We get asked the following question about Automatic Data Optimization (ADO) storage tiering quite a bit. Can you tier back to the original location if the data gets hot again? The answer is yes but not with standard Automatic Data Optimization policies, at least not reliably. That's not how ADO is meant to operate. ADO is meant to mirror a traditional view of Information Lifecycle Management (ILM) where data will be very volatile when first created, will become less active or cool, and then will eventually cease to be accessed at all (i.e. cold). I think the reason this question gets asked is because customers realize that many of their business processes are cyclical and the thinking goes that those segments that only get used during month end or year-end cycles could sit on lower cost storage when not being used. Unfortunately this doesn't fit very well with the ADO storage tiering model. ADO storage tiering is based on the amount of free and used space in the source tablespace. There are two parameters that control this behavior, TBS_PERCENT_USED and TBS_PERCENT_FREE. When the space in the tablespace exceeds the TBS_PERCENT_USED value then segments specified in storage tiering clause(s) can be moved until the percent of free space reaches the TBS_PERCENT_FREE value. It is worth mentioning that no checks are made for available space in the target tablespace. Now, it is certainly possible to create custom functions to control storage tiering, but this can get complicated. The biggest problem is insuring that there is enough space to move the segment back to tier 1 storage, assuming that that's the goal. This isn't as much of a problem when moving from tier 1 to tier 2 storage because there is typically more tier 2 storage available. At least that's the premise since it is supposed to be less costly, lower performing and higher capacity storage. In either case though, if there isn't enough space then the operation fails. In the case of a customized function, the question becomes do you attempt to free the space so the move can be made or do you just stop and return false so that the move cannot take place? This is really the crux of the issue. Once you cross into this territory you're really going to have to implement two-way hierarchical storage and the whole point of ADO was to provide automatic storage tiering. You're probably better off using heat map and/or business access requirements and building your own hierarchical storage management infrastructure if you really want two way storage tiering.

    Read the article

  • Storing a looong lookup table

    - by inquisitive
    Background: The product I am working on has a very long lookup table. The table contains static data and cannot be auto-generated. There are about 500 rows and 10 columns, and the columns hold mostly integers and strings. To complicate matters, there are actually two such tables: every row in table-1 maps to zero or more rows in table-2. We use an SQLite database with the two tables. The product installer places the SQLite file in the installation directory. The application is written in .NET and we use ADO to load the data once on startup. Now, the lookup table grows: in each monthly release we add about 10 new entries, and existing entries are adjusted and fine-tuned.
    The problem: A team of (10) developers works on the lookup table. Code goes into SVN, but the little devil, the SQLite file, does not. This prevents multiple developers from working on it. We do take regular backups of the file, but proper versioning is not possible; we never know who made the breaking change. The worst thing is we don't know if there is any change at all. Diff'ing databases is tedious if not impossible. The tables are expected to grow quite large in years to come, and we would need developers to work on them in parallel. The data is business critical, and we need to be able to audit changes made to it.
    Question: What would be a solution for the problems outlined above? One idea was to transform the whole thing to XML and treat it like just another source file; that way SVN can do the versioning and we can work in parallel. But the data shows relational behavior: with XML we lose the unique and foreign-key constraints, and we can't query it with SQL-like ease. Any help here will be appreciated.
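
    A rough sketch of the XML idea (it assumes the System.Data.SQLite ADO.NET provider, and the table and key names are hypothetical): export each table ordered by its key into one XML file that SVN can version and diff, while the SQLite file itself stays a generated build artifact.

        using System.Data;
        using System.Data.SQLite; // assumption: the System.Data.SQLite ADO.NET provider

        class LookupTableExporter
        {
            static void Main()
            {
                using (var connection = new SQLiteConnection("Data Source=lookup.db"))
                {
                    connection.Open();

                    var dataSet = new DataSet("Lookup");

                    // Hypothetical table and key names. ORDER BY keeps the XML stable,
                    // so diffs show real changes rather than row reordering.
                    foreach (string table in new[] { "Table1", "Table2" })
                    {
                        using (var adapter = new SQLiteDataAdapter(
                            "SELECT * FROM " + table + " ORDER BY Id", connection))
                        {
                            adapter.Fill(dataSet, table);
                        }
                    }

                    // WriteSchema keeps column types (though not SQLite's key constraints) with the data.
                    dataSet.WriteXml("lookup.xml", XmlWriteMode.WriteSchema);
                }
            }
        }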

    Read the article

  • SQL Saturday 194 - Exeter

    - by Dave Ballantyne
    Many kudos go to Jonathan and Annette Allen and the others on the team for confirming SQL Saturday 194 in Exeter on the 8th and 9th of March.  The event home page is here: http://www.sqlsaturday.com/194/eventhome.aspx and I am delighted that myself and Dave Morrison will be presenting a full-day pre-con on the 8th on our favourite subjects, "TSQL and Internals". Here is the full abstract: TSQL and internals - When faced with performance issues there are many lines of attack. Tuning the engine itself can get you so far; however, for maximum effect you need to understand the engine and how it translates SQL statements into performable actions. This is not a simple task - it is a massive task to deal with a multi-table join, and the number of permutations can be immense. Backed by this knowledge, we can create better-performing TSQL, understand the impact that it has upon the engine, and recognize the pitfalls and gotchas that exist in SQL Server. Ultimately, there is no 'best way' to perform a single task, only many variations of 'it depends', but now we can pick the most appropriate option for the required data load. Over the years, many myths and misconceptions have grown around the product; some have a basis in older versions and some are just wrong. Continuing to build on the knowledge given so far, these issues will be explored, broken down, and proved or disproved. Finally we will look to the future and explore SQL Server 2012, the new functionality that it brings, and some of the common uses that we will be able to address. After completion of this day's pre-con, attendees will have a more complete knowledge of execution plans, and how they relate to the physical and logical actions that SQL Server will be executing on their behalf. The attendees will also have a more rounded and fuller knowledge of TSQL and the implications of incorrectly defining a query. Dave is a fountain of knowledge on execution plans and optimizer internals and, though I may flatter myself, I'm no shrinking violet when it comes to TSQL and such matters.  I hope that if you can't join us, there are other pre-cons available from other experts in their fields that may 'float your boat' too.  The pre-con page is http://sqlsouthwest.co.uk/SQLSaturday_precon.htm Also, excitingly, this pre-con day is sponsored by Fusion-IO, which is a great boon for the day. If you want more of this, then I am offering a 2-day TSQL course starting on the 19th of March. More details on this are available here.

    Read the article

  • JEP 124: Enhance the Certificate Revocation-Checking API

    - by smullan
    Revocation checking is the mechanism to determine the revocation status of a certificate. If it is revoked, it is considered invalid and should not be used. Currently as of JDK 7, the PKIX implementation of java.security.cert.CertPathValidator  includes a revocation checking implementation that supports both OCSP and CRLs, the two main methods of checking revocation. However, there are very few options that allow you to configure the behavior. You can always implement your own revocation checker, but that's a lot of work. JEP 124 (Enhance the Certificate Revocation-Checking API) is one of the 11 new security features in JDK 8. This feature enhances the java.security.cert API to support various revocation settings such as best-effort checking, end-entity certificate checking, and mechanism-specific options and parameters. Let's describe each of these in more detail and show some examples. The features are provided through a new class named PKIXRevocationChecker. A PKIXRevocationChecker instance is returned by a PKIX CertPathValidator as follows: CertPathValidator cpv = CertPathValidator.getInstance("PKIX"); PKIXRevocationChecker prc = (PKIXRevocationChecker)cpv.getRevocationChecker(); You can now set various revocation options by calling different methods of the returned PKIXRevocationChecker object. For example, the best-effort option (called soft-fail) allows the revocation check to succeed if the status cannot be obtained due to a network connection failure or an overloaded server. It is enabled as follows: prc.setOptions(Enum.setOf(Option.SOFT_FAIL)); When the SOFT_FAIL option is specified, you can still obtain any exceptions that may have been thrown due to network issues. This can be useful if you want to log this information or treat it as a warning. You can obtain these exceptions by calling the getSoftFailExceptions method: List<CertPathValidatorException> exceptions = prc.getSoftFailExceptions(); Another new option called ONLY_END_ENTITY allows you to only check the revocation status of the end-entity certificate. This can improve performance, but you should be careful using this option, as the revocation status of CA certificates will not be checked. To set more than one option, simply specify them together, for example: prc.setOptions(Enum.setOf(Option.SOFT_FAIL, Option.ONLY_END_ENTITY)); By default, PKIXRevocationChecker will try to check the revocation status of a certificate using OCSP first, and then CRLs as a fallback. However, you can switch the order using the PREFER_CRLS option, or disable the fallback altogether using the NO_FALLBACK option. For example, here is how you would only use CRLs to check the revocation status: prc.setOptions(Enum.setOf(Option.PREFER_CRLS, Option.NO_FALLBACK)); There are also a number of other useful methods which allow you to specify various options such as the OCSP responder URI, the trusted OCSP responder certificate, and OCSP request extensions. However, one of the most useful features is the ability to specify a cached OCSP response with the setOCSPResponse method. This can be quite useful if the OCSPResponse has already been obtained, for example in a protocol that uses OCSP stapling. 
After you have set all of your preferred options, you must add the PKIXRevocationChecker to your PKIXParameters object as one of your custom CertPathCheckers before you validate the certificate chain, as follows: PKIXParameters params = new PKIXParameters(keystore); params.addCertPathChecker(prc); CertPathValidatorResult result = cpv.validate(path, params); Early access binaries of JDK 8 can be downloaded from http://jdk8.java.net/download.html
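    Putting the snippets above together, a minimal end-to-end sketch might look like the following. This is a rough illustration only, written against the JDK 8 early-access API described in the post; the class name, method signature, and the particular choice of SOFT_FAIL plus PREFER_CRLS are mine, not from the post.

import java.security.KeyStore;
import java.security.cert.CertPath;
import java.security.cert.CertPathValidator;
import java.security.cert.CertPathValidatorException;
import java.security.cert.CertPathValidatorResult;
import java.security.cert.PKIXParameters;
import java.security.cert.PKIXRevocationChecker;
import java.security.cert.PKIXRevocationChecker.Option;
import java.util.EnumSet;
import java.util.List;

public class RevocationCheckExample {

    // Validates a certificate chain against the trust anchors in 'trustStore',
    // using best-effort (soft-fail) revocation checking with CRLs preferred.
    public static void validateWithRevocation(CertPath path, KeyStore trustStore)
            throws Exception {
        CertPathValidator cpv = CertPathValidator.getInstance("PKIX");
        PKIXRevocationChecker prc =
                (PKIXRevocationChecker) cpv.getRevocationChecker();

        // Pick the revocation options discussed above.
        prc.setOptions(EnumSet.of(Option.SOFT_FAIL, Option.PREFER_CRLS));

        // Register the checker and validate the chain.
        PKIXParameters params = new PKIXParameters(trustStore);
        params.addCertPathChecker(prc);
        CertPathValidatorResult result = cpv.validate(path, params);

        // With SOFT_FAIL, network problems do not fail validation, but any
        // exceptions raised along the way can still be inspected or logged.
        List<CertPathValidatorException> soft = prc.getSoftFailExceptions();
        for (CertPathValidatorException e : soft) {
            System.err.println("Revocation warning: " + e);
        }
    }
}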

    Read the article

  • Working for Web using open-source Technologies

    - by anirudha
    As web developers, we all share the dream of building a great web application. A great application is built on discipline and best practice throughout the development process, so that we can make modifications more easily in the future. User feedback also matters, because users tell us what they want and expect from the application we work on day and night. Sometimes they share a nice story or experience, or report a problem they hit with our application. That matters, because they use our software and so become part of the process of developing its next version. The web has the advantage that applications can be updated very quickly. With desktop applications, clients run into a number of problems: installation never goes right on every system, and big companies spend large amounts of money troubleshooting the problems users have with their software.

    A web application is a nicer way to deliver software: there is no installation trouble, everyone gets the same experience, and if something goes wrong a patch comes soon, with no waiting for a new version. Chrome, even though it is a desktop application (a browser), updates itself automatically, so there is no trouble for the user in getting the next version. Web application development the Microsoft way has its own rules, patterns, and practices for building better applications in less time. The technologies I want to show you here are some great open-source examples: MySQL, jQuery, and ASP.NET MVC, a framework based on ASP.NET.

    To follow this tutorial fully, you need the following software, all of which is free: Visual Web Developer 2010 Express Edition; MySQL (an open-source RDBMS); and jQuery (an open-source JavaScript library). Visual Web Developer can be obtained from Microsoft.com/Express, and if you are a student or a web developer you are eligible to get Visual Studio Professional and many other great tools from Microsoft through their DreamSpark or WebsiteSpark programmes. MySQL is a great relational database management system freely available from MySQL.com; as a database monitoring tool you can use MySQL Workbench, which can be downloaded free from the official MySQL website, or one of the many other free tools available for MySQL development. jQuery is a great library that makes JavaScript development easier and faster; you can obtain jQuery from jQuery.com, its official website.

    Read the article

  • Taking a Project's Development to the Next Level

    - by user1745022
    I have been looking for advice for a while on how to handle a project I am working on, but to no avail. I am pretty much on my fourth iteration of improving an "application" I am working on; the first two iterations were in Excel, the third in Access, and now it is in Visual Studio. The field is manufacturing. The basic idea is that I take read-only data from a massive Sybase server, filter it, and create much smaller tables in Access daily (using delete and append queries), and then do a bunch of stuff. More specifically, I use a series of queries to either combine data from multiple tables or group data in specific ways (aggregate functions), and then I place this data into a table (so I can sort and manipulate the data using DAO.Recordset and run multiple custom algorithms). This process is repeated multiple times throughout the database until a set of relevant tables has been created. Many times I will create a field in a query with a value such as 1.1 so that when I append it to a table I can store information from the algorithms in that field. So as the process continues, the number of fields in the tables changes. The overall application consists of four "back-end" databases linked together on a shared drive, with various output (either front-end Access applications or Excel). So my question is: is this essentially how many data-driven applications that solve problems work? Each back-end database is updated with fresh data daily, and updating takes around 10 seconds for three of them and 2 minutes for the fourth.

    Project objectives: I want to, and am planning to, move to SQL Server soon. The front end will be a web application (I know basic web development and like the administration flexibility), and Visual Studio will be the IDE, with C#/.NET. Should these algorithms be run "inside the database", or as a series of C# functions on each server request? I know you're not supposed to store data in a database unless it is an actual data point, and in Access I have many columns that just hold calculations from algorithms written in VBA. The truth is, I have seen multiple professional Access applications, and have never seen one that has the complexity of mine or does even close to what mine does (for better or worse). But I know some professional software applications are a thousand times better than mine. So please, please, please give me a suggestion of some sort. I have been completely on my own and need some guidance on how to approach this project the right way.

    Read the article

  • How to implement a component based system for items in a web game.

    - by Landstander
    Having read several other questions and answers on using a component-based system to define items, I want to use one for the items and spells in a web game written in PHP. I'm just stuck on the implementation. I'm going to use the DB schema suggested in this series (part 5 describes the schema): http://t-machine.org/index.php/2007/09/03/entity-systems-are-the-future-of-mmog-development-part-1/ This means I'll have an items table with generic item properties, a table listing all of the components for an item, and finally records in each component table used to make up the item. Assuming I can select the first two together in a single query, I'm still going to run one query per component type. I'm kind of fine with this because I can cache the data in memcache and check there first before doing any queries. I'll need to build up the items on every request they are used in, so the implementation needs to be on the lean side even when they're pulled from memcache. But that is about where my confidence in implementing a component system for my items ends. I figure I'd need to bring attributes and behaviors into the container from each component it uses. I'm just not sure how to do that effectively without ending up writing a lot of specialized code to deal with each component. For example, an AttackComponent might need to know how to filter targets inside of a battle context and also maybe provide an attack behavior. That same item might also have a UsableComponent, which allows the item to be used and applies some effect onto a different set of targets, filtered differently, from the same battle context. Then again, not every part of an item is an active part; an AttributeBonusComponent might need to kick in only when the item is in an equipped state, or when displaying the item details page. Ultimately, how should I bring all of the components together into the container so that when I use an item as a weapon I get the correct list of targets? Or know when a weapon can also be used as an item? Or apply the bonuses the item provides to a character object? I feel like I've gone too far down the rabbit hole and can't grasp the simple solution in front of me. (If that makes any sense at all.) Likewise, if I were to implement the best answer from "How to model multiple 'uses' (e.g. weapon) for usable-inventory/object/items (e.g. katana) within a relational database", I feel like I'd have a lot of the same questions.

    Read the article

  • Entity System with C++ templates

    - by tommaisey
    I've been getting interested in the Entity/Component style of game programming, and I've come up with a design in C++ which I'd like a critique of. I decided to go with a fairly pure Entity system, where entities are simply an ID number. Components are stored in a series of vectors - one for each Component type. However, I didn't want to have to add boilerplate code for every new Component type I added to the game. Nor did I want to use macros to do this, which frankly scare me. So I've come up with a system based on templates and type hinting. But there are some potential issues I'd like to check before I spend ages writing this (I'm a slow coder!).

    All Components derive from a Component base class. This base class has a protected constructor that takes a string parameter. When you write a new derived Component class, you must initialise the base with the name of your new class in a string. When you first instantiate a new DerivedComponent, it adds the string to a static hashmap inside Component mapped to a unique integer id. When you subsequently instantiate more Components of the same type, no action is taken. The result (I think) should be a static hashmap with the name of each class derived from Component that you instantiate at least once, mapped to a unique id, which can be obtained with the static method Component::getTypeId ("DerivedComponent"). Phew.

    The next important part is TypedComponentList<typename PropertyType>. This is basically just a wrapper around a std::vector<PropertyType> with some useful methods. It also contains a hashmap of entity ID numbers to slots in the array so we can find Components by their entity owner. Crucially, TypedComponentList<> is derived from the non-template class ComponentList. This allows me to maintain a list of pointers to ComponentList in my main ComponentManager, which actually point to TypedComponentLists with different template parameters (sneaky). The ComponentManager has template functions such as: template <typename ComponentType> void addProperty (ComponentType& component, int componentTypeId, int entityId) and: template <typename ComponentType> TypedComponentList<ComponentType>* getComponentList (int componentTypeId) which deal with casting from ComponentList to the correct TypedComponentList for you. So to get a list of a particular type of Component you call: TypedComponentList<MyComponent>* list = componentManager.getComponentList<MyComponent> (Component::getTypeId("MyComponent")); Which, I'll admit, looks pretty ugly.

    Bad points of the design: If a user of the code writes a new Component class but supplies the wrong string to the base constructor, the whole system will fail. Each time a new Component is instantiated, we must check a hashed string to see if that component type has been instantiated before. It will probably generate a lot of assembly because of the extensive use of templates; I don't know how well the compiler will be able to minimise this. You could consider the whole system a bit complex - perhaps premature optimisation? But I want to use this code again and again, so I want it to be performant.

    Good points of the design: Components are stored in typed vectors but they can also be found by using their entity owner id as a hash. This means we can iterate them fast, and minimise cache misses, but also skip straight to the component we need if necessary. We can freely add Components of different types to the system without having to add and manage new Component vectors by hand. What do you think? 
Do the good points outweigh the bad?
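    For reference, here is a rough, compressed sketch of the scheme described above, using the names from the post. It is illustrative only: error handling, thread safety, and cleanup of the heap-allocated lists are omitted.

#include <cstddef>
#include <string>
#include <unordered_map>
#include <vector>

class Component {
public:
    // Returns the id assigned to a component type name, or -1 if that
    // type has never been instantiated.
    static int getTypeId(const std::string& name) {
        auto it = typeIds().find(name);
        return it != typeIds().end() ? it->second : -1;
    }

protected:
    // Each derived class passes its own class name; the first
    // instantiation of that type registers a fresh integer id.
    explicit Component(const std::string& name) {
        auto& ids = typeIds();
        if (ids.find(name) == ids.end()) {
            ids[name] = static_cast<int>(ids.size());
        }
    }

private:
    static std::unordered_map<std::string, int>& typeIds() {
        static std::unordered_map<std::string, int> ids;  // name -> unique id
        return ids;
    }
};

// Non-template base so the manager can hold heterogeneous lists by pointer.
class ComponentList {
public:
    virtual ~ComponentList() {}
};

template <typename ComponentType>
class TypedComponentList : public ComponentList {
public:
    void add(int entityId, const ComponentType& component) {
        slotByEntity_[entityId] = components_.size();
        components_.push_back(component);
    }

    ComponentType* findByEntity(int entityId) {
        auto it = slotByEntity_.find(entityId);
        return it != slotByEntity_.end() ? &components_[it->second] : nullptr;
    }

private:
    std::vector<ComponentType> components_;               // contiguous, cache-friendly
    std::unordered_map<int, std::size_t> slotByEntity_;   // entity id -> slot
};

class ComponentManager {
public:
    template <typename ComponentType>
    TypedComponentList<ComponentType>* getComponentList(int componentTypeId) {
        if (componentTypeId < 0) {
            return nullptr;  // type was never registered
        }
        if (componentTypeId >= static_cast<int>(lists_.size())) {
            lists_.resize(componentTypeId + 1, nullptr);
        }
        if (lists_[componentTypeId] == nullptr) {
            lists_[componentTypeId] = new TypedComponentList<ComponentType>();
        }
        // The manager only stores ComponentList*; the cast restores the type.
        return static_cast<TypedComponentList<ComponentType>*>(lists_[componentTypeId]);
    }

    template <typename ComponentType>
    void addProperty(ComponentType& component, int componentTypeId, int entityId) {
        getComponentList<ComponentType>(componentTypeId)->add(entityId, component);
    }

private:
    std::vector<ComponentList*> lists_;  // indexed by component type id
};

    With something like this in place, the lookup quoted in the post, componentManager.getComponentList<MyComponent>(Component::getTypeId("MyComponent")), resolves to the correctly typed list without any per-component boilerplate, at the cost of one static_cast hidden inside the manager.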

    Read the article

< Previous Page | 816 817 818 819 820 821 822 823 824 825 826 827  | Next Page >