Search Results

Search found 21589 results on 864 pages for 'primary key'.


  • Somebody is storing credit card data - how are they doing it?

    - by pygorex1
    Storing credit card information securely and legally is very difficult and should not be attempted. I have no intention of storing credit card data but I'm dying to figure out the following: My credit card info is being stored on a server somewhere in the world. This data is (hopefully) not being stored on a merchant's server, but at some point it needs to be stored to verify and charge the account identified by merchant-submitted data. My question is this: if you were tasked with storing credit card data what encryption strategy would you use to secure the data on-disk? From what I can tell submitted credit card info is being checked more or less in real time. I doubt that any encryption key used to secure the data is being entered manually, so decryption is being done on the fly, which implies that the keys themselves are being stored on-disk. How would you secure your data and your keys in an automated system like this?
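
    A hedged sketch of the approach usually described for this, envelope encryption: each record gets its own random data key, the record is encrypted with that key, and only a wrapped (encrypted) copy of the data key is stored, with the key-encryption key (KEK) held somewhere else entirely, typically an HSM or key-management service. The C# below only illustrates the idea; every name is hypothetical and real card storage is governed by PCI DSS.

        using System;
        using System.Security.Cryptography;

        public static class CardVaultSketch
        {
            // Encrypt the card number under a fresh per-record data key, then wrap that
            // data key under a key-encryption key (KEK) that lives outside this system.
            public static void Protect(byte[] plaintextPan, byte[] kek,
                out byte[] cipher, out byte[] dataIv, out byte[] wrappedKey, out byte[] wrapIv)
            {
                using (Aes dataKey = Aes.Create())                 // random key and IV per record
                using (ICryptoTransform enc = dataKey.CreateEncryptor())
                {
                    dataIv = dataKey.IV;
                    cipher = enc.TransformFinalBlock(plaintextPan, 0, plaintextPan.Length);

                    using (Aes wrap = Aes.Create())                // wrap the data key under the KEK
                    {
                        wrap.Key = kek;                            // assumed to be a valid AES key
                        wrapIv = wrap.IV;
                        using (ICryptoTransform wrapEnc = wrap.CreateEncryptor())
                            wrappedKey = wrapEnc.TransformFinalBlock(dataKey.Key, 0, dataKey.Key.Length);
                    }
                }
            }
        }

    Decryption reverses the steps: fetch the KEK from the key service, unwrap the record's data key, decrypt the record. The point of the indirection is that the KEK never has to sit on the same disk as the data, and rotating it only means re-wrapping data keys rather than re-encrypting every record.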

    Read the article

  • jquery select iframe children

    - by Val
    I am using the editArea library and jQuery to do what I need... http://www.cdolivet.com/index.php?page=editArea&sess=2b8243f679e0d472397bfa959e1d3841 In my HTML there is an iframe tag that editArea uses. What I need is to access it with jQuery, something like so: $('iframe textarea').keydown(function (e){ number = 17; //any number really :) if(e.which == number){ //do something... alert('Done...'); } }); I tried the above but it looks like it is not detecting that key, yet it works if the selector is $(document), so the rest of the function works; it's just not picking up the iframe's textarea keydown. Any ideas? Thanks

    Read the article

  • How would I do this in SQL?

    - by bergyman
    Let's say I have the following tables: PartyRelationship EffectiveDatedAttributes PartyRelationship contains a varchar column called class. EffectiveDatedAttributes contains a foreign key to PartyRelationship called ProductAuthorization. If I run this: select unique eda.productauthorization from effectivedatedattributes eda inner join partyrelationship pr on eda.productauthorization = pr.ID where pr.class = 'com.pmc.model.bind.ProductAuthorization' it returns a list of ProductAuthorization IDs. I need to take this list of IDs and insert a new row into EffectiveDatedAttributes for every ID, containing the ID as well as some other data. How would I iterate over the returned IDs from the previous select statement in order to do this?
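
    If the goal is simply one new row per returned ID, there may be no need to iterate at all; a set-based INSERT ... SELECT does the whole thing in one statement. A hedged sketch (SomeOtherColumn and the literal are placeholders, and SqlConnection is only an example provider):

        string sql = @"
            INSERT INTO EffectiveDatedAttributes (ProductAuthorization, SomeOtherColumn)
            SELECT DISTINCT eda.ProductAuthorization, 'some other data'
            FROM   EffectiveDatedAttributes eda
                   INNER JOIN PartyRelationship pr ON eda.ProductAuthorization = pr.ID
            WHERE  pr.class = 'com.pmc.model.bind.ProductAuthorization'";

        using (var conn = new System.Data.SqlClient.SqlConnection(connectionString))
        using (var cmd = new System.Data.SqlClient.SqlCommand(sql, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();   // one round trip, no client-side loop over the IDs
        }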

    Read the article

  • How can I programmatically obtain the company info used to digitally sign an assembly in .NET?

    - by chaiguy
    As a means of simple security, I was previously checking the digital signature of a downloaded update package for my program against its public key to ensure that it originated from me. However, as I'm using cheap code signing certs (Tucows), I am unable to renew an existing cert and therefore the keys change every time I need to renew. Therefore, a more reliable means would be to verify the organization information embedded in the signed assembly (which is displayed in the UAC dialog) against my well-known organization string, as this will continue to be the same. Does anyone know how to obtain this information from a digitally-signed assembly?
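
    A hedged sketch of one way to read that organization string: X509Certificate.CreateFromSignedFile parses the Authenticode certificate embedded in the signed file, and its Subject carries the O= organization component. Note that this only reads the certificate; it does not by itself prove the signature is valid, so some form of chain/signature verification is still needed before trusting the subject. The path and company name below are placeholders.

        using System.Security.Cryptography.X509Certificates;

        X509Certificate2 cert = new X509Certificate2(
            X509Certificate.CreateFromSignedFile(@"C:\Updates\MyUpdate.dll"));

        // Subject is the full distinguished name, e.g. "CN=..., O=My Company Inc, L=..., C=..."
        string subject = cert.Subject;
        bool organizationMatches = subject.Contains("O=My Company Inc");

        // Basic validation of the certificate chain (revocation/policy checks may need more).
        bool chainIsValid = cert.Verify();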

    Read the article

  • How do i write a sql query with user input and wildcards

    - by acidzombie24
    Usually I write my WHERE statements as WHERE key=@0 and then add a param. Now I would like the user to specify a few letters such as 'oat' and would like to stick wildcards around it, %oat%, which matches 'coating'. So how do I write the query so wildcards are around the user input (the 'searchword')? Now let's take it a step further. I would not like the user to write % himself, so he cannot do blah%ing. Perhaps % is part of the sentence (or it could be flat out illegal, but I prefer it be part of the sentence). How do I put my wildcards around a word or sentence and disallow the user from putting the wildcard between his words (and preferably make % part of the sentence)? C# ado.net sql/sqlite
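
    A hedged sketch of the usual pattern: keep the wildcards out of the SQL text and out of the user's hands by escaping % and _ inside the input, then putting the surrounding % on the parameter value itself. Table and column names are placeholders; both SQL Server and SQLite understand LIKE ... ESCAPE.

        // cmd is an existing SqlCommand or SQLiteCommand
        string escaped = userInput
            .Replace(@"\", @"\\")    // escape the escape character first
            .Replace("%", @"\%")
            .Replace("_", @"\_");

        cmd.CommandText = @"SELECT * FROM Items WHERE key LIKE @0 ESCAPE '\'";
        cmd.Parameters.AddWithValue("@0", "%" + escaped + "%");

    Because the user's own % and _ arrive escaped, blah%ing only matches the literal text blah%ing, while the query's leading and trailing % still give the 'contains' behaviour.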

    Read the article

  • How to implement a graph-structured stack?

    - by Emil
    Ok, so I would like to make a GLR parser generator. I know there exist such programs better than what I will probably make, but I am doing this for fun/learning so that's not important. I have been reading about GLR parsing and I think I have a decent high level understanding of it now. But now it's time to get down to business. The graph-structured stack (GSS) is the key data structure for use in GLR parsers. Conceptually I know how GSS works, but none of the sources I looked at so far explain how to implement GSS. I don't even have an authoritative list of operations to support. Can someone point me to some good sample code/tutorial for GSS? Google didn't help so far. I hope this question is not too vague.
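
    Not an authoritative implementation, but a minimal sketch of the structure as it is usually described for Tomita-style GLR parsing: nodes carry a parser state, nodes with the same state at the same input position are merged, and each node keeps a set of predecessor links. That leaves only two operations you really need: push (with merging) and walking fixed-length paths backwards when applying a reduction. The names and the int state type are assumptions.

        using System.Collections.Generic;

        public sealed class GssNode
        {
            public int State { get; }
            public HashSet<GssNode> Predecessors { get; } = new HashSet<GssNode>();
            public GssNode(int state) { State = state; }
        }

        public sealed class Gss
        {
            // Merge nodes that share the same parser state at the same input position.
            private readonly Dictionary<(int position, int state), GssNode> nodes =
                new Dictionary<(int position, int state), GssNode>();

            public GssNode Push(int position, int state, GssNode predecessor)
            {
                GssNode node;
                if (!nodes.TryGetValue((position, state), out node))
                {
                    node = new GssNode(state);
                    nodes[(position, state)] = node;
                }
                if (predecessor != null) node.Predecessors.Add(predecessor);
                return node;
            }

            // "Pop" for a reduction of 'length' symbols: enumerate every node reachable by
            // walking exactly 'length' predecessor links back from 'from'.
            public IEnumerable<GssNode> PathsBack(GssNode from, int length)
            {
                if (length == 0) { yield return from; yield break; }
                foreach (GssNode pred in from.Predecessors)
                    foreach (GssNode n in PathsBack(pred, length - 1))
                        yield return n;
            }
        }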

    Read the article

  • Change background color of disabled listbox in windows classic theme

    - by Mercury821
    I am developing a WPF application that must run using the Windows Classic theme. The application creates a dialog box containing a ListBox. When the dialog box is shown, it must be disabled for 1s before accepting any input. I am accomplishing this with a style trigger, and it works. However, the ListBox shows a white background when it's disabled, which I can't seem to get rid of. When using the Aero theme, the following style resource fixes the issue: <SolidColorBrush x:Key="{x:Static SystemColors.ControlBrushKey}" Color="Transparent"/> But when using the Windows Classic theme, the white background reappears. How can I remedy the situation for the Classic theme?

    Read the article

  • Parse values from a text file in C

    - by Mohit Deshpande
    Say I have written to a text file in this format: key1/value1 key2/value2 akey/withavalue anotherkey/withanothervalue I have a linked list like: struct Node { char *key; char *value; struct Node *next; }; to hold the values. How would I read key1 and value1? I was thinking of reading the file line by line into a buffer and using strtok(buffer, '/'). Would that work? What other ways could work, maybe a bit faster or less prone to error? Please include a code sample if you can!

    Read the article

  • WPF - Binding a color resource to the data object within a DataTemplate

    - by John
    I have a DataTemplate and a SolidColorBrush in the DataTemplate.Resources section. I want to bind the color to a property of the same data object that the DataTemplate itself is bound to. However, this does not work. The brush is ignored. Why? <DataTemplate DataType="{x:Type data:MyData}" x:Name="dtData"> <DataTemplate.Resources> <SolidColorBrush x:Key="bg" Color="{Binding Path=Color, Converter={StaticResource colorConverter}" /> </DataTemplate.Resources> <Border CornerRadius="15" Background="{StaticResource bg}" Margin="0" Opacity="0.5" Focusable="True"> </DataTemplate>

    Read the article

  • Paging over a lazy-loaded collection with NHibernate

    - by HackedByChinese
    I read this article where Ayende states NHibernate can (compared to EF 4): Collection with lazy=”extra” – Lazy extra means that NHibernate adapts to the operations that you might run on top of your collections. That means that blog.Posts.Count will not force a load of the entire collection, but rather would create a “select count(*) from Posts where BlogId = 1” statement, and that blog.Posts.Contains() will likewise result in a single query rather than paying the price of loading the entire collection to memory. Collection filters and paged collections - this allows you to define additional filters (including paging!) on top of your entities collections, which means that you can easily page through the blog.Posts collection, and not have to load the entire thing into memory. So I decided to put together a test case. I created the cliché Blog model as a simple demonstration, with two classes as follows: public class Blog { public virtual int Id { get; private set; } public virtual string Name { get; set; } public virtual ICollection<Post> Posts { get; private set; } public virtual void AddPost(Post item) { if (Posts == null) Posts = new List<Post>(); if (!Posts.Contains(item)) Posts.Add(item); } } public class Post { public virtual int Id { get; private set; } public virtual string Title { get; set; } public virtual string Body { get; set; } public virtual Blog Blog { get; private set; } } My mappings files look like this: <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" default-access="property" auto-import="true" default-cascade="none" default-lazy="true"> <class xmlns="urn:nhibernate-mapping-2.2" name="Model.Blog, TestEntityFramework, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" table="Blogs"> <id name="Id" type="System.Int32, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"> <column name="Id" /> <generator class="identity" /> </id> <property name="Name" type="System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"> <column name="Name" /> </property> <property name="Type" type="System.Int32, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"> <column name="Type" /> </property> <bag lazy="extra" name="Posts"> <key> <column name="Blog_Id" /> </key> <one-to-many class="Model.Post, TestEntityFramework, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" /> </bag> </class> </hibernate-mapping> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" default-access="property" auto-import="true" default-cascade="none" default-lazy="true"> <class xmlns="urn:nhibernate-mapping-2.2" name="Model.Post, TestEntityFramework, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" table="Posts"> <id name="Id" type="System.Int32, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"> <column name="Id" /> <generator class="identity" /> </id> <property name="Title" type="System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"> <column name="Title" /> </property> <property name="Body" type="System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"> <column name="Body" /> </property> <many-to-one class="Model.Blog, TestEntityFramework, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" name="Blog"> <column name="Blog_id" /> </many-to-one> </class> </hibernate-mapping> My test case looks something like this: using (ISession session = Configuration.Current.CreateSession()) // this class returns a custom 
ISession that represents either EF4 or NHibernate { blogs = (from b in session.Linq<Blog>() where b.Name.Contains("Test") orderby b.Id select b); Console.WriteLine("# of Blogs containing 'Test': {0}", blogs.Count()); Console.WriteLine("Viewing the first 5 matching Blogs."); foreach (Blog b in blogs.Skip(0).Take(5)) { Console.WriteLine("Blog #{0} \"{1}\" has {2} Posts.", b.Id, b.Name, b.Posts.Count); Console.WriteLine("Viewing first 5 matching Posts."); foreach (Post p in b.Posts.Skip(0).Take(5)) { Console.WriteLine("Post #{0} \"{1}\" \"{2}\"", p.Id, p.Title, p.Body); } } } Using lazy="extra", the call to b.Posts.Count does do a SELECT COUNT(Id)... which is great. However, b.Posts.Skip(0).Take(5) just grabs all Posts for Blog.Id = ?id, and then LINQ on the application side is just taking the first 5 from the resulting collection. What gives?
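
    A hedged guess at the disconnect: the paging claim refers to NHibernate's collection filters, not to LINQ over the mapped collection (the LINQ provider of that era does not translate Skip/Take on a lazy bag, so the bag is initialized and paged in memory, exactly as observed). A sketch of the filter approach against the model above, where b is the Blog from the loop:

        // Pages the lazy Posts collection in SQL without initializing the whole bag.
        IList<Post> firstFivePosts = session
            .CreateFilter(b.Posts, "order by this.Id")
            .SetFirstResult(0)
            .SetMaxResults(5)
            .List<Post>();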

    Read the article

  • Override tooltip text for Titlebar buttons (Close, Maximize, Minimize, Help)

    - by Tim
    I have been trying without luck to change the text of the tooltip that appears for the buttons on the main title bar of a form. In a nutshell, we have harnessed the 'Help' button for Windows Forms to have some other purpose. This is working fine. The issue is that when hovering the mouse over that button, a 'Help' tooltip appears, which doesn't make any sense for the application. Ideally, there would be some way to change the text of that tooltip for my application; however, at this point I would be satisfied just finding a way to disable the tooltips altogether. I know that you can disable the tooltips for the entire OS by modifying the 'UserPreferencesMask' key in regedit, but I would really like a way to have this only affect my application. Again, ideally there would be some way to do this with managed code, but I would not be opposed to linking into the Windows API or the like. Thanks for any suggestions for resolving this issue!

    Read the article

  • How apply a storyboard to a Label programmatically ?

    - by ThitoO
    Hi :), I have a little problem. I've searched Google and my WPF books but I haven't found any answer :( I have created a little storyboard: <Storyboard x:Key="whiteAnim" Duration="1"> <ColorAnimation By="Black" To="White" Storyboard.TargetProperty="Background" x:Name="step1"/> <ColorAnimation By="White" To="Black" Storyboard.TargetProperty="Background" x:Name="step2"/> </Storyboard> This animation will change the background color from black to white, and from white to black. I want to "apply" this storyboard to a Label: Label label = new Label(); label.Content = "My label"; I'm looking for a method like "label.StartStoryboard(--myStoryboard--)". Do you have any ideas? Thank you :)
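
    A hedged sketch of one way to do it from code-behind: fetch the Storyboard from the resources and call Begin with the Label as its target, so child animations that do not name a TargetName animate that Label. One assumption worth flagging: a ColorAnimation has to resolve to a Color property, so the storyboard above would need Storyboard.TargetProperty="Background.Color" (and the Label needs a non-frozen SolidColorBrush to animate).

        // Inside a Window or UserControl, so FindResource can see the "whiteAnim" resource.
        Label label = new Label();
        label.Content = "My label";
        label.Background = new SolidColorBrush(Colors.Black);   // give the animation a brush it can animate

        Storyboard whiteAnim = (Storyboard)FindResource("whiteAnim");
        whiteAnim.Begin(label);   // animations without a TargetName use 'label' as their target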

    Read the article

  • Visual Studio DataSet Designer keep queries

    - by LnDCobra
    In the Visual Studio DataSet designer (the screen where you have all the UML diagrams, including relations), is there any way to refresh a table and its relations/foreign key constraints without refreshing the whole table? The way I am doing it at the moment is removing the table and adding it again. This adds all the relations and refreshes all fields. Also, if I change a field's data type, is there a way to automatically refresh all the fields in the datasource? Again, without deleting the table and adding it again. The reason for this is that some of my TableAdapters have quite a number of complex queries attached to them, and when I remove the table the adapter gets removed as well, including all its queries. I am using Visual Studio 2008 and connecting to a MySQL database.

    Read the article

  • Help converting some classic asp code to c# (.net 3.5)

    - by xoail
    I have this block of code in a .asp file that I am struggling to convert to c#... can anyone help me? Function EncodeCPT(ByVal sPinCode, ByVal iOfferCode, ByVal sShortKey, ByVal sLongKey) Dim vob(2), encodeModulo(256), decodeX, ocode decodeX = " abcdefghijklmnopqrstuvwxyz0123456789!$%()*+,-.@;<=>?[]^_{|}~" if len(iOfferCode) = 5 then ocode = iOfferCode Mod 10000 else ocode = iOfferCode end if vob(1) = ocode Mod 100 vob(0) = Int((ocode-vob(1)) / 100) For i = 1 To 256 encodeModulo(i) = 0 Next For i = 0 To 60 encodeModulo(asc(mid(decodeX, i + 1, 1))) = i Next 'append offer code to key sPinCode = lcase(sPinCode) & iOfferCode If Len(sPinCode) < 20 Then sPinCode = Left(sPinCode & " couponsincproduction", 20) End If 'encode Dim i, q, j, k, sCPT, s1, s2, s3 i = 0 q = 0 j = Len(sPinCode) k = Len(sShortKey) sCPT = "" For i = 1 To j s1 = encodeModulo(asc( mid(sPinCode, i, 1)) ) s2 = 2 * encodeModulo( asc( mid(sShortKey, 1 + ((i - 1) Mod k), 1) ) ) s3 = vob(i Mod 2) q = (q + s1 + s2 + s3) Mod 61 sCPT = sCPT & mid(sLongKey, q + 1, 1) Next EncodeCPT = sCPT End Function
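
    A hedged, line-for-line C# sketch of the function above, assuming the parameters arrive as strings, the input is plain ASCII, and the 1-based VBScript string indexing is the main thing that needs translating. Worth checking against known outputs of the ASP version before trusting it.

        using System.Text;

        public static string EncodeCPT(string sPinCode, string iOfferCode, string sShortKey, string sLongKey)
        {
            const string decodeX = " abcdefghijklmnopqrstuvwxyz0123456789!$%()*+,-.@;<=>?[]^_{|}~";

            int ocode = int.Parse(iOfferCode);
            if (iOfferCode.Length == 5) ocode = ocode % 10000;

            int[] vob = new int[2];
            vob[1] = ocode % 100;
            vob[0] = (ocode - vob[1]) / 100;

            // Map each character of the 61-character alphabet to its index (VBScript Asc = char code here).
            int[] encodeModulo = new int[256];
            for (int idx = 0; idx <= 60; idx++)
                encodeModulo[decodeX[idx]] = idx;

            // Append the offer code to the key and pad/trim to 20 characters.
            string pin = sPinCode.ToLower() + iOfferCode;
            if (pin.Length < 20)
                pin = (pin + " couponsincproduction").Substring(0, 20);

            int q = 0;
            int k = sShortKey.Length;
            StringBuilder sCPT = new StringBuilder();
            for (int i = 1; i <= pin.Length; i++)            // 1-based, mirroring the VBScript loop
            {
                int s1 = encodeModulo[pin[i - 1]];
                int s2 = 2 * encodeModulo[sShortKey[(i - 1) % k]];
                int s3 = vob[i % 2];
                q = (q + s1 + s2 + s3) % 61;
                sCPT.Append(sLongKey[q]);                    // Mid(sLongKey, q + 1, 1)
            }
            return sCPT.ToString();
        }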

    Read the article

  • RPX API Call auth_info is returning "missing parameter" error

    - by James Lawruk
    I cannot get the RPX auth_info API call to work. It keeps returning the error: "Missing parameter: apiKey" I am using the C# RPX Helper Class provided on their Wiki:RPX Helper Class Below is my code in my Page_Load method. The RPX service works by sending a POST to a Url that I specify. My code gets the token from the post data shown below. Then I call the AuthInfo API method. string token = Request.Params["token"]; string apiKey = "xxxxxxxxxxxxxxx"; //my API key Rpx rpx = new Rpx(apiKey, "http://rpxnow.com"); XmlElement xmlElement = rpx.AuthInfo(token); Everything looks good. The token is populated. Within their code, the "apiKey" value pair is added to the post data written to the Request stream. Has anyone had luck with this? Any ideas why this is not working? Thanks.
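
    One way to take the helper class out of the equation: auth_info is just an HTTPS POST with apiKey and token as form fields, so posting directly and looking at the raw response usually shows what the service is actually receiving. A hedged sketch (note the https URL; a helper built against plain http://rpxnow.com is itself worth double-checking):

        using System.Collections.Specialized;
        using System.Net;
        using System.Text;

        string token = Request.Params["token"];
        string apiKey = "xxxxxxxxxxxxxxx";

        using (var client = new WebClient())
        {
            var form = new NameValueCollection
            {
                { "apiKey", apiKey },
                { "token", token }
            };
            byte[] responseBytes = client.UploadValues("https://rpxnow.com/api/v2/auth_info", form);
            string response = Encoding.UTF8.GetString(responseBytes);   // inspect the raw payload or error
        }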

    Read the article

  • Monitoring Events in your BPEL Runtime - RSS Feeds?

    - by Ramkumar Menon
    @10g - It had been a while since I'd tried something different. so here's what I did this week!Whenever our Developers deployed processes to the BPEL runtime, or perhaps when a process gets turned off due to connectivity issues, or maybe someone retired a process, I needed to know. So here's what I did. Step 1: Downloaded Quartz libraries and went through the documentation to understand what it takes to schedule a recurring job. Step 2: Cranked out two components using Oracle JDeveloper. [Within a new Web Project] a) A simple Java Class named FeedUpdater that extends org.quartz.Job. All this class does is to connect to your BPEL Runtime [via opmn:ormi] and fetch all events that occured in the last "n" minutes. events? - If it doesn't ring a bell - its right there on the BPEL Console. If you click on "Administration > Process Log" - what you see are events.The API to retrieve the events is //get the locator reference for the domain you are interested in.Locator l = .... //Predicate to retrieve events for last "n" minutesWhereCondition wc = new WhereCondition(...) //get all those events you needed.BPELProcessEvent[] events = l.listProcessEvents(wc); After you get all these events, write out these into an RSS Feed XML structure and stream it into a file that resides either in your Apache htdocs, or wherever it can be accessed via HTTP.You can read all about RSS 2.0 here. At a high level, here is how it looks like. <?xml version = '1.0' encoding = 'UTF-8'?><rss version="2.0">  <channel>    <title>Live Updates from the Development Environment</title>    <link>http://soadev.myserver.com/feeds/</link>    <description>Live Updates from the Development Environment</description>    <lastBuildDate>Fri, 19 Nov 2010 01:03:00 PST</lastBuildDate>    <language>en-us</language>    <ttl>1</ttl>    <item>      <guid>1290213724692</guid>      <title>Process compiled</title>      <link>http://soadev.myserver.com/BPELConsole/mdm_product/administration.jsp?mode=processLog&amp;processName=&amp;dn=all&amp;eventType=all&amp;eventDate=600&amp;Filter=++Filter++</link>      <pubDate>Fri Nov 19 00:00:37 PST 2010</pubDate>      <description>SendPurchaseOrderRequestService: 3.0 Time : Fri Nov 19 00:00:37                   PST 2010</description>    </item>   ...... </channel> </rss> For writing ut XML content, read through Oracle XML Parser APIs - [search around for oracle.xml.parser.v2] b) Now that my "Job" was done, my job was half done. Next, I wrote up a simple Scheduler Servlet that schedules the above "Job" class to be executed ever "n" minutes. It is very straight forward. Here is the primary section of the code.           try {        Scheduler sched = StdSchedulerFactory.getDefaultScheduler();         //get n and make a trigger that executes every "n" seconds        Trigger trigger = TriggerUtils.makeSecondlyTrigger(n);        trigger.setName("feedTrigger" + System.currentTimeMillis());        trigger.setGroup("feedGroup");                JobDetail job = new JobDetail("SOA_Feed" + System.currentTimeMillis(), "feedGroup", FeedUpdater.class);        sched.scheduleJob(job,trigger);         }catch(Exception ex) {            ex.printStackTrace();            throw new ServletException(ex.getMessage());        } Look up the Quartz API and documentation. It will make this look much simpler.   Now that both components were ready, I packaged the Application into a war file and deployed it onto my Application Server. When the servlet initialized, the "n" second schedule was set/initialized. 
From then on, the servlet kept populating the RSS feed file. I just ensured that my "Job" code keeps only the 30 latest events within it, so that the feed file stays small and under control [a few KBs]. Next I opened up the feed XML in my browser - it requested a subscription - and here I was, watching new deployments/life cycle events all popping up on my browser toolbar every 5 (actually n) minutes! Well, you could do it in a browser/reader of your choice - or perhaps read them like you read an email in your Thunderbird!

    Read the article

  • Django Per-site caching using memcached

    - by Paul
    Hi, So I'm using per-site caching on a project and I've observed the following, which is kind of confusing. When I load a flat page in my browser, then change it through admin and then do a refresh (within the cache timeout), there is no change in the page--as expected. However, when I start a new session in a different browser and load the page (still within the timeout), the app is hit instead of the cache. Isn't the cache key generated from the URL? It seems that the session state is getting in there somewhere, which is causing a cache miss. Any ideas? thanks MIDDLEWARE_CLASSES = ( 'django.middleware.cache.UpdateCacheMiddleware', 'django.middleware.common.CommonMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.middleware.gzip.GZipMiddleware', 'django.middleware.http.ConditionalGetMiddleware', 'django.middleware.doc.XViewMiddleware', 'ittybitty.middleware.IttyBittyURLMiddleware', 'django.contrib.flatpages.middleware.FlatpageFallbackMiddleware', 'maintenancemode.middleware.MaintenanceModeMiddleware', 'djangodblog.middleware.DBLogMiddleware', 'SSL.middleware.SSLRedirect', #SSL middleware to handle SSL 'django.middleware.cache.FetchFromCacheMiddleware', )

    Read the article

  • IBM "per core" comparisons for SPECjEnterprise2010

    - by jhenning
    I recently stumbled upon a blog entry from Roman Kharkovski (an IBM employee) comparing some SPECjEnterprise2010 results for IBM vs. Oracle. Mr. Kharkovski's blog claims that SPARC delivers half the transactions per core vs. POWER7. Prior to any argument, I should say that my predisposition is to like Mr. Kharkovski, because he says that his blog is intended to be factual; that the intent is to try to avoid marketing hype and FUD tactic; and mostly because he features a picture of himself wearing a bike helmet (me too). Therefore, in a spirit of technical argument, rather than FUD fight, there are a few areas in his comparison that should be discussed. Scaling is not free For any benchmark, if a small system scores 13k using quantity R1 of some resource, and a big system scores 57k using quantity R2 of that resource, then, sure, it's tempting to divide: is  13k/R1 > 57k/R2 ? It is tempting, but not necessarily educational. The problem is that scaling is not free. Building big systems is harder than building small systems. Scoring  13k/R1  on a little system provides no guarantee whatsoever that one can sustain that ratio when attempting to handle more than 4 times as many users. Choosing the denominator radically changes the picture When ratios are used, one can vastly manipulate appearances by the choice of denominator. In this case, lots of choices are available for the resource to be compared (R1 and R2 above). IBM chooses to put cores in the denominator. Mr. Kharkovski provides some reasons for that choice in his blog entry. And yet, it should be noted that the very concept of a core is: arbitrary: not necessarily comparable across vendors; fluid: modern chips shift chip resources in response to load; and invisible: unless you have a microscope, you can't see it. By contrast, one can actually see processor chips with the naked eye, and they are a bit easier to count. If we put chips in the denominator instead of cores, we get: 13161.07 EjOPS / 4 chips = 3290 EjOPS per chip for IBM vs 57422.17 EjOPS / 16 chips = 3588 EjOPS per chip for Oracle The choice of denominator makes all the difference in the appearance. Speaking for myself, dividing by chips just seems to make more sense, because: I can see chips and count them; and I can accurately compare the number of chips in my system to the count in some other vendor's system; and Tthe probability of being able to continue to accurately count them over the next 10 years of microprocessor development seems higher than the probability of being able to accurately and comparably count "cores". SPEC Fair use requirements Speaking as an individual, not speaking for SPEC and not speaking for my employer, I wonder whether Mr. Kharkovski's blog article, taken as a whole, meets the requirements of the SPEC Fair Use rule www.spec.org/fairuse.html section I.D.2. For example, Mr. Kharkovski's footnote (1) begins Results from http://www.spec.org as of 04/04/2013 Oracle SUN SPARC T5-8 449 EjOPS/core SPECjEnterprise2010 (Oracle's WLS best SPECjEnterprise2010 EjOPS/core result on SPARC). IBM Power730 823 EjOPS/core (World Record SPECjEnterprise2010 EJOPS/core result) The questionable tactic, from a Fair Use point of view, is that there is no such metric at the designated location. At www.spec.org, You can find the SPEC metric 57422.17 SPECjEnterprise2010 EjOPS for Oracle and You can also find the SPEC metric 13161.07 SPECjEnterprise2010 EjOPS for IBM. 
Despite the implication of the footnote, you will not find any mention of 449 nor anything that says 823. SPEC says that you can, under its fair use rule, derive your own values; but it emphasizes: "The context must not give the appearance that SPEC has created or endorsed the derived value." Substantiation and transparency Although SPEC disclaims responsibility for non-SPEC information (section I.E), it says that non-SPEC data and methods should be accurate, should be explained, should be substantiated. Unfortunately, it is difficult or impossible for the reader to independently verify the pricing: Were like units compared to like (e.g. list price to list price)? Were all components (hw, sw, support) included? Were all fees included? Note that when tpc.org shows IBM pricing, there are often items such as "PROCESSOR ACTIVATION" and "MEMORY ACTIVATION". Without the transparency of a detailed breakdown, the pricing claims are questionable. T5 claim for "Fastest Processor" Mr. Kharkovski several times questions Oracle's claim for fastest processor, writing You see, when you publish industry benchmarks, people may actually compare your results to other vendor's results. Well, as we performance people always say, "it depends". If you believe in performance-per-core as the primary way of looking at the world, then yes, the POWER7+ is impressive, spending its chip resources to support up to 32 threads (8 cores x 4 threads). Or, it just might be useful to consider performance-per-chip. Each SPARC T5 chip allows 128 hardware threads to be simultaneously executing (16 cores x 8 threads). The Industry Standard Benchmark that focuses specifically on processor chip performance is SPEC CPU2006. For this very well known and popular benchmark, SPARC T5: provides better performance than both POWER7 and POWER7+, for 1 chip vs. 1 chip, for 8 chip vs. 8 chip, for integer (SPECint_rate2006) and floating point (SPECfp_rate2006), for Peak tuning and for Base tuning. For example, at the 8-chip level, integer throughput (SPECint_rate2006) is: 3750 for SPARC 2170 for POWER7+. You can find the details at the March 2013 BestPerf CPU2006 page SPEC is a trademark of the Standard Performance Evaluation Corporation, www.spec.org. The two specific results quoted for SPECjEnterprise2010 are posted at the URLs linked from the discussion. Results for SPEC CPU2006 were verified at spec.org 1 July 2013, and can be rechecked here.

    Read the article

  • NHibernate ICriteria with a bag

    - by plunk
    Hi, Just a quick question. If I've got 2 tables that are joined by a 3rd table with a many-to-many relationship, is it possible to write an ICriteria with expressions on one of the tables and the join table? Let's say the mapping file looks something like: <bag name ="Bag" table="JoinTable" cascade ="none"> <key column="Data_ID"/> <many-to-many class="Data2" column="Data2_ID"/> </bag> Is it then possible to write an ICriteria like the following? ICriteria crit = session.CreateCriteria(typeof(Data)); crit.Add(Expression.Eq("Name", name)); crit.Add(Expression.Between("Date", startDate, endDate)); crit.Add(Expression.Eq("Bag", data2IDNumber)); When I try this, it tells me the expected type is IList, whereas the actual type is Bag. Thanks.
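
    A hedged sketch of what usually works here: rather than an Expression.Eq against the collection property itself, create a sub-criteria (a join) over the Bag association and restrict the joined Data2 entity, for example by its identifier. The identifier property name ("Id") is an assumption based on the mapping shown.

        ICriteria crit = session.CreateCriteria(typeof(Data));
        crit.Add(Expression.Eq("Name", name));
        crit.Add(Expression.Between("Date", startDate, endDate));

        // Join through the many-to-many bag and filter on the related Data2 row.
        crit.CreateCriteria("Bag")
            .Add(Expression.Eq("Id", data2IDNumber));

        IList<Data> results = crit.List<Data>();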

    Read the article

  • Amazon S3 Change file download name

    - by Daveo
    I have files stored on S3 with a GUID as the key name. I am using a pre-signed URL to download, as per the S3 REST API. I store the original file name in my own database. When a user clicks to download a file from my web application I want to return their original file name, but currently all they get is a GUID. How can I achieve this? My web app is on Salesforce, so due to governor limitations I do not have much control to do response.redirects or to download the file to the web server and then rename it. Is there some HTML redirect, meta refresh, or Javascript I can use? Is there some way to change the download file name for S3 (the only thing I can think of is copying the object to a new name, downloading it, then deleting it)? I want to avoid creating a bucket per user, as we will have a lot of users and still no guarantee each file within each bucket will have a unique name. Any other solutions?
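
    One avenue worth checking (a hedged sketch, not Salesforce-specific): S3 pre-signed GET URLs can carry a response-content-disposition override, so the object keeps its GUID key but is served to the browser under the original file name. The C# below uses the AWS SDK for .NET (newer versions expose ResponseHeaderOverrides) purely to show the idea; from Apex the equivalent is including a signed response-content-disposition query parameter when building the pre-signed URL.

        var request = new Amazon.S3.Model.GetPreSignedUrlRequest
        {
            BucketName = "my-bucket",                      // placeholder
            Key = guidKey,                                 // the GUID object key
            Expires = DateTime.UtcNow.AddMinutes(10),
            ResponseHeaderOverrides = new Amazon.S3.Model.ResponseHeaderOverrides
            {
                ContentDisposition = "attachment; filename=\"" + originalFileName + "\""
            }
        };
        string downloadUrl = s3Client.GetPreSignedURL(request);   // send the user to this URL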

    Read the article

  • SqlPlus on mac osx 10.6 doesn't work

    - by lesce
    When i try to run this #sqlplus system@orcl it gives me this error SQL*Plus: Release 10.1.0.3.0 - Production on Tue Apr 20 02:24:41 2010 Copyright (c) 1982, 2004, Oracle. All rights reserved. Enter password: ERROR: ORA-12154: TNS:could not resolve the connect identifier specified the oracle server is working , I can connect through SQLDeveloper My .profile looks like this export PATH=/opt/local/bin:/opt/local/sbin:$PATH # Setting PATH for Python 3.1 # The orginal version is saved in .profile.pysave PATH="/Library/Frameworks/Python.framework/Versions/3.1/bin:${PATH}" DYLD_LIBRARY_PATH=/Users/lesce/instantclient export TNS_ADMIN=/Users/lesce/instantclient export ORACLE_SID="orcl" export DYLD_LIBRARY_PATH export PATH=$PATH:/Users/lesce/instantclient tnsnames.ora ORCL = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orcl) ) ) listener.ora SID_LIST_LISTENER = (SID_LIST = (SID_DESC = (SID_NAME = PLSExtProc) (ORACLE_HOME = /Users/oracle/oracle/product/10.2.0/db_1) (PROGRAM = extproc) ) (SID_DESC = (SID_NAME = orcl) (ORACLE_HOME = /Users/oracle/oracle/product/10.2.0/db_1) ) ) LISTENER = (DESCRIPTION_LIST = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521)) (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0)) ) ) My SqlDeveloper configuration username : sys role : sysdba connection type : basic hostname : localhost port : 1521 sid : orcl

    Read the article

  • Developing Schema Compare for Oracle (Part 3): Ghost Objects

    - by Simon Cooper
    In the previous blog post, I covered how we solved the problem of dependencies between objects and between schemas. However, that isn’t the end of the issue. The dependencies algorithm I described works when you’re querying live databases and you can get dependencies for a particular schema direct from the server, and that’s all well and good. To throw a (rather large) spanner in the works, Schema Compare also has the concept of a snapshot, which is a read-only compressed XML representation of a selection of schemas that can be compared in the same way as a live database. This can be useful for keeping historical records or a baseline of a database schema, or comparing a schema on a computer that doesn’t have direct access to the database. So, how do snapshots interact with dependencies? Inter-database dependencies don't pose an issue as we store the dependencies in the snapshot. However, comparing a snapshot to a live database with cross-schema dependencies does cause a problem; what if the live database has a dependency to an object that does not exist in the snapshot? Take a basic example schema, where you’re only populating SchemaA: SOURCE   TARGET (using snapshot) CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(col1));   CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100)); CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);   CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100)); In this case, we want to generate a sync script to synchronize SchemaA.Table1 on the database represented by the snapshot. When taking a snapshot, database dependencies are followed, but because you’re not comparing it to anything at the time, the comparison dependencies algorithm described in my last post cannot be used. So, as you only take a snapshot of SchemaA on the target database, SchemaB.Table1 will not be in the snapshot. If this snapshot is then used to compare against the above source schema, SchemaB.Table1 will be included in the source, but the object will not be found in the target snapshot. This is the same problem that was solved with comparison dependencies, but here we cannot use the comparison dependencies algorithm as the snapshot has not got any information on SchemaB! We've now hit quite a big problem - we’re trying to include SchemaB.Table1 in the target, but we simply do not know the status of this object on the database the snapshot was taken from; whether it exists in the database at all, whether it’s the same as the target, whether it’s different... What can we do about this sorry state of affairs? Well, not a lot, it would seem. We can’t query the original database, as it may not be accessible, and we cannot assume any default state as it could be wrong and break the script (and we currently do not have a roll-back mechanism for failed synchronizes). The only way to fix this properly is for the user to go right back to the start and re-create the snapshot, explicitly including the schemas of these 'ghost' objects. So, the only thing we can do is flag up dependent ghost objects in the UI, and ask the user what we should do with it – assume it doesn’t exist, assume it’s the same as the target, or specify a definition for it. Unfortunately, such functionality didn’t make the cut for v1 of Schema Compare (as this is very much an edge case for a non-critical piece of functionality), so we simply flag the ghost objects up in the sync wizard as unsyncable, and let the user sort out what’s going on and edit the sync script as appropriate. 
There are some things that we do do to alleviate somewhat this rather unhappy situation; if a user creates a snapshot from the source or target of a database comparison, we include all the objects registered from the database, not just the ones in the schemas originally selected for comparison. This includes any extra dependent objects registered through the comparison dependencies algorithm. If the user then compares the resulting snapshot against the same database they were comparing against when it was created, the extra dependencies will be included in the snapshot as required and everything will be good. Fortunately, this problem will come up quite rarely, and only when the user uses snapshots and tries to sync objects with unknown cross-schema dependencies. However, the solution is not an easy one, and lead to some difficult architecture and design decisions within the product. And all this pain follows from the simple decision to allow schema pre-filtering! Next: why adding a column to a table isn't as easy as you would think...

    Read the article

  • Kohana ORM Aliasing and "Trying to get property of non-object"

    - by Toto
    I have the following tables in the database: teams: id name matches: id team1_id team2_id I've defined the following ORM models in my Kohana application: class Match_Model extends ORM { protected $belongs_to = array('team1_id' => 'team', 'team2_id' => 'team'); } class Team_Model extends ORM { protected $has_many = array('matches'); } The following code in a controller: $match = ORM::factory('match',1); echo $match->team1_id->name; /* <-- */ is throwing the following error on the line marked with /* <-- */: Trying to get property of non-object The framework is yielding the value of the foreign key instead of a reference to a Team_Model instance, as it should (given the has_many and belongs_to properties stated). Am I missing something? Note: Just in case, I've added the irregular plural 'match' => 'matches' in application/config/inflector.php

    Read the article

  • How to setup default attributes in a ruby model

    - by webdestroya
    I have a model User and when I create one, I want to programmatically set up some API keys and whatnot, specifically: @user.apikey = Digest::MD5.hexdigest(BCrypt::Password.create("jibberish").to_s) I want to be able to run User.create!(:email=>"[email protected]") and have it create a user with a randomly generated API key and secret. I currently am doing this in the controller, but when I tried to add a default user to the seeds.rb file, I got an SQL error (saying my apikey is null). I tried overriding the save definition, but that seemed to cause problems when I updated the model, because it would override the values. I tried overriding the initialize definition, but that is returning a nil:NilClass and breaking things. Is there a better way to do this?

    Read the article

  • Adding 90000 XElement to XDocument

    - by Jon
    I have a Dictionary<int, MyClass>. It contains 100,000 items; 10,000 items have their value populated, whilst 90,000 are null. I have this code: var nullitems = MyInfoCollection.Where(x => x.Value == null).ToList(); nullitems.ForEach(x => LogMissedSequenceError(x.Key + 1)); private void LogMissedSequenceError(long SequenceNumber) { DateTime recordTime = DateTime.Now; var errors = MyXDocument.Descendants("ERRORS").FirstOrDefault(); if (errors != null) { errors.Add( new XElement("ERROR", new XElement("DATETIME", DateTime.Now.ToString("dd/MM/yyyy HH:mm:ss:fff")), new XElement("DETAIL", "No information was read for expected sequence number " + SequenceNumber), new XAttribute("TYPE", "MISSED"), new XElement("PAGEID", SequenceNumber) ) ); } } This seems to take about 2 minutes to complete. I can't seem to find where the bottleneck might be, or whether this timing sounds about right. Can anyone see why it's taking so long?
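
    A couple of cheap changes are worth trying before profiling further (a hedged sketch, intended to keep the same output apart from sharing one timestamp): look the ERRORS element up once instead of 90,000 times, and hand Add the whole batch of new elements in a single call.

        var errors = MyXDocument.Descendants("ERRORS").FirstOrDefault();
        if (errors != null)
        {
            DateTime now = DateTime.Now;
            var newErrors = MyInfoCollection
                .Where(x => x.Value == null)
                .Select(x => (long)x.Key + 1)
                .Select(seq => new XElement("ERROR",
                    new XElement("DATETIME", now.ToString("dd/MM/yyyy HH:mm:ss:fff")),
                    new XElement("DETAIL", "No information was read for expected sequence number " + seq),
                    new XAttribute("TYPE", "MISSED"),
                    new XElement("PAGEID", seq)));

            errors.Add(newErrors);   // XElement.Add accepts a whole sequence of content at once
        }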

    Read the article
