Search Results

Search found 14719 results on 589 pages for 'optimization level'.

Page 420 of 589

  • defining a simple implicit Arbitrary

    - by FredOverflow
    I have a type Foo with a constructor that takes an Int. How do I define an implicit Arbitrary for Foo to be used with scalacheck? implicit def arbFoo: Arbitrary[Foo] = ??? I came up with the following solution, but it's a bit too "manual" and low-level for my taste: val fooGen = for (i <- Gen.choose(Int.MinValue, Int.MaxValue)) yield new Foo(i) implicit def arbFoo: Arbitrary[Foo] = Arbitrary(fooGen) Ideally, I would want a higher-order function where I just have to plug in an Int => Foo function. I managed to cut it down to: implicit def arbFoo = Arbitrary(Gen.resultOf((i: Int) => new Foo(i))) But I still feel like there has got to be a slightly simpler way.
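    A slightly shorter sketch (assuming Foo really does just wrap a single Int): map ScalaCheck's built-in Int generator straight into the constructor instead of spelling out the range by hand.

        import org.scalacheck.Arbitrary
        import org.scalacheck.Arbitrary.arbitrary

        // arbitrary[Int] is ScalaCheck's default Int generator; map it into Foo
        implicit val arbFoo: Arbitrary[Foo] = Arbitrary(arbitrary[Int].map(i => new Foo(i)))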

    Read the article

  • Ignoring generated files when using "Treat warnings as errors"

    - by krystan honour
    We have started a new project but also have this problem in an existing project. The problem is that when we compile at warning level 4 we also want to switch on 'Treat all warnings as errors'. We are unable to do this at the moment because generated files (in particular reference.cs files) are missing things like XML comments, and this generates a warning. We do not want to suppress the XML comment warnings across all files, just for specific types of files (namely generated code). I have thought of a way this could be achieved but am not sure if it is the best way to do this or indeed where to start :) My thinking is that we need to do something with T4 templates for the generated code so that it does fill in XML documentation for generated code. Does anyone have any ideas? Currently I'm at well over 2k warnings (it's a big project) :(

    Read the article

  • how to subtract numbers from levels

    - by romunov
    Dear SOFers, I would like to cut a vector of values ranging from 0 to 70 into x categories, and would like the upper limit of each category. So far I have tried this using cut() and am trying to extract the limits from the levels. I have a list of levels, from which I would like to extract the second number of each level. How can I extract the value between the comma and "]" (which is the number I'm interested in)? I have: > levels(bins) [1] "(-0.07,6.94]" "(6.94,14]" "(14,21]" "(21,28]" "(28,35]" [6] "(35,42]" "(42,49]" "(49,56]" "(56,63.1]" "(63.1,70.1]" and would like to get: [1] 6.94 14 21 28 35 42 49 56 63.1 70.1 Or is there a better way of calculating the upper bounds of the categories?
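    A hedged sketch of the extraction step (assuming every level looks like "(lower,upper]", as cut() produces by default): strip everything up to the comma plus the closing bracket, then convert to numeric.

        # keep only what sits between the comma and "]" in each level label
        upper <- as.numeric(gsub(".*,|\\]$", "", levels(bins)))
        upper
        # [1]  6.94 14.00 21.00 28.00 35.00 42.00 49.00 56.00 63.10 70.10

    If the breaks are computed up front (e.g. with seq()) and passed to cut(), the upper bounds are already known and no label parsing is needed at all.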

    Read the article

  • Optimal Sharing of heavy computation job using Snow and/or multicore

    - by James
    Hi, I have the following problem. First, my environment: I have two 24-CPU servers to work with and one big job (resampling a large dataset) to share between them. I've set up multicore and a (socket) snow cluster on each. As a high-level interface I'm using foreach. What is the optimal sharing of the job? Should I set up a snow cluster using CPUs from both machines and split the job that way (i.e. use doSNOW for the foreach loop)? Or should I use the two servers separately and use multicore on each server (i.e. split the job in two chunks, run them on each server and then stitch the results back together)? Basically, what is an easy way to: 1. Keep communication between servers down (since this is probably the slowest bit). 2. Ensure that the random numbers generated on the servers are not highly correlated.
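    Not an answer to which split is optimal, but for reference a sketch of the first option (host names, worker counts and the RNG setup are assumptions; clusterSetupRNG needs the rlecuyer package installed): one socket cluster spanning both machines, registered with doSNOW, with L'Ecuyer streams so the random numbers on the two servers stay independent.

        library(snow)
        library(doSNOW)
        library(foreach)

        # One SOCK cluster across both 24-CPU servers (host names are placeholders)
        cl <- makeCluster(c(rep("server1", 24), rep("server2", 24)), type = "SOCK")
        clusterSetupRNG(cl, seed = 1234)  # independent L'Ecuyer streams per worker
        registerDoSNOW(cl)

        res <- foreach(i = 1:1000, .combine = c) %dopar% {
            mean(rnorm(1000))  # placeholder for one resampling replicate
        }
        stopCluster(cl)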

    Read the article

  • how to debug MySql stored procs without breaking control flow from application

    - by M.Taha Masood
    Is there a way to do the following: I have a MySQL DB, and there are many stored procs written in it as well. I use the MySQL client library in C to connect to this DB and, amongst other things, call the stored procedures. Is there a way to set breakpoints in the stored procedures such that when the call is made from the C program (using the MySQL client library) into the stored proc, control flow is halted and we can step into the called stored proc to whatever level of nesting and inspect variables etc. (like any decent C debugger provides)? Is there ANY way to do the above? Through some third-party tool or the like, if not through plain MySQL. Help is appreciated. Thanks

    Read the article

  • Leave approval hierarchy in openERP

    - by Miraj Baldha
    I am new to OpenERP and facing issues with the HR module. I have this structure: Project Manager -> Team Leader -> Developer. The Team Leader is the manager of the Developer, and the Project Manager is the manager of the Team Leader. So, if a developer asks for leave, the leave request should first be sent to the Team Leader (with a mail notification to the Team Leader and Project Manager), and once the TL approves the leave, the request should automatically be sent to the Project Manager for second-level approval. With OpenERP 6.1, there is no way for the Team Leader to approve leave unless and until the Team Leader is specified as an HR manager, which is inappropriate. If anybody has a solution, let me know. Thanks.

    Read the article

  • Debugging stack data not assigned to a named variable

    - by gibbss
    Is there a way to view stack elements like unassigned return values or exceptions that are not assigned to a local variable? (e.g. throw new ...) For example, suppose I have code along the lines of: public String foo(InputStream in) throws IOException { NastyObj obj = null; try { obj = new NastyObj(in); return (obj.read()); } finally { if (obj != null) obj.close(); } } Is there any way to view the return or exception value without stepping up to a higher-level frame where it is assigned? This is particularly relevant with exceptions because you often have to step back up through a number of frames to find an actual handler. I usually use the Eclipse debugging environment, but any answer is appreciated. Also, if this cannot be done, can you explain why? (JVM, JPDA limitation?)
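    As far as I know, the debugger can only show values that are bound to a name in some frame, so a common workaround is simply to rewrite the method so the interesting values get names before they leave it - a sketch of that variant of the code above:

        // Debug-friendly variant: the return value and the exception are bound to
        // locals, so they appear in the Variables view without changing behaviour.
        public String foo(InputStream in) throws IOException {
            NastyObj obj = null;
            try {
                obj = new NastyObj(in);
                String result = obj.read();   // breakpoint here shows the result
                return result;
            } catch (IOException e) {         // breakpoint here shows the exception
                throw e;
            } finally {
                if (obj != null) obj.close();
            }
        }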

    Read the article

  • In what language was MSDOS originally written?

    - by nebukadnezzar
    In what language was MSDOS originally written? The Wikipedia article implies either C, QBasic or Pascal, but: C was invented to write UNIX, so I don't believe it was used to write MSDOS; Pascal seems popular for teaching programming, but not really popular for writing operating systems; QBasic didn't seem to be very popular for operating systems at the time MSDOS was developed (or was *BASIC ever very popular for writing operating systems?). Besides these three languages there is also assembly, but I assume that Microsoft had already switched from assembly to a "higher-level" language? Since C was originally invented for UNIX, I still wouldn't think Microsoft was using C... although the Microsoft API is written in C (I find this kind of oxymoronic, actually). Can anyone enlighten me on this topic?

    Read the article

  • Viewing Crystal Reports other than through custom developed webform or winform apps

    - by Andrew
    At work we currently have a custom in-house built WinForms app for the business users to view reports. It has role-based security and several administrator functions. My boss is thinking about getting me to port this app to WebForms. My question is, are there options other than custom-built WinForms and WebForms apps for deploying/viewing/administering Crystal Reports at an enterprise level (role-based security, easy report deployment, etc.)? I'm thinking about third-party packages or perhaps applications provided by Microsoft/Business Objects/SAP? We are using Crystal Reports 11.5.

    Read the article

  • what is a performant way to 'tree-walk' through my Entity Framework data

    - by Greg
    Hi, I have an Entity Framework design with a few tables that define a "graph". So there can be a large chain of relationships between objects in the few tables via parent/child relationships. What is a performant way of tree-walking through my Entity Framework data? That is, I assume I wouldn't want to load the full set of all NODES and RELATIONSHIPS from the database just for the purpose of walking the tree, where the end result may only be identifying leaf nodes? Or would this be OK with the way lazy loading may work at the column/parameter level? Otherwise, how could I load just the skeleton of the objects and then, when needing to refer to any attributes, have them lazy-load?

    Read the article

  • Does anyone know any good MATLAB code for rumor routing?

    - by Shruti Rattan
    I am looking for MATLAB code that works for rumor routing. In rumor routing, some N nodes are generated first and one of the nodes randomly generates an 'agent'. The agent carries the information of where it is coming from, what information (like temperature, humidity, etc.) it is looking for, and what nodes it has traversed (basically the path to where it originated). Also, another agent is generated by some other node that has some information to share (like the temperature or humidity level of an area) with any other node looking for it. Now if the path of the information-seeking agent (the former) intersects the path followed by the information-giving agent (the latter), and the information happens to be the same, then the path is made and used for that information exchange. But there is another problem: the path has to be the shortest path available between them, depending on how many intermediate nodes need to be passed through to reach the destination node. Now I know it's a lot of work, but even a little help will be appreciated. Thanks guys

    Read the article

  • Form to sort an index in rails

    - by shmichael
    I'm a newcomer to Rails. I want to build a simple form that determines the sort order of a list. I've implemented a form along the lines of: <%= radio_button_tag :sort, "rating" %> <%= label_tag :sort_rating, "order by rating" %> <%= radio_button_tag :sort, "name" %> <%= label_tag :sort_name, "order by name" %> And now I am unsure how to implement the sort at the controller/model level. The aspects I am puzzled about are: Where should the sort be performed? How should the sort parameter be persisted? How can the code be reused? Right now, I can't even get the selected sort method to remain selected after a submit. I would most appreciate any guidance or a reference to an example.
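    A hedged sketch of the controller side (controller and model names are invented, and it assumes Rails 3+ style ActiveRecord): whitelist the parameter, remember it in the session so it persists across submits, and let the model do the ordering.

        # app/controllers/items_controller.rb  (illustrative names)
        class ItemsController < ApplicationController
          SORTS = %w[rating name]

          def index
            # fall back to the remembered sort, then to a default
            sort = params[:sort] || session[:sort] || "rating"
            sort = "rating" unless SORTS.include?(sort)  # whitelist, never raw params in SQL
            session[:sort] = sort                        # persists the choice across requests
            @items = Item.order(sort)
          end
        end

    In the view, the remembered value can pre-select the right radio button, e.g. radio_button_tag :sort, "rating", session[:sort] == "rating".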

    Read the article

  • mediawiki markup equivalent of WMD live-previewing editor? (not WYSIWYG)

    - by Justin Grant
    Anyone have a recommendation for an editor like the WMD editor, but using MediaWiki markup instead of Markdown? Our site is already using MediaWiki markup but we want a slicker editor without changing markup completely. Requirements include: live preview of formatted text underneath the markup you're typing a toolbar for common formatting (bold, italic, links, bullets, numbered-list, code, etc) keyboard shortcuts for each toolbar button (e.g. CTRL+B for bold) Undo/redo via keyboard shortcuts (CTRL+Z/CTRL+Y) or toolbar buttons works well in the usual set of popular browsers (including IE6!) open-source would be preferred I've found a few options at http://www.mediawiki.org/wiki/WYSIWYG_editor, but all of these seem to be WYSIWYG editors which is not exactly what I want since full-on WYSIWYG editors tend to be bug-prone and complicate working at the markup level. Instead we want a plain-text markup editor with a client-side previewer, plus some UI niceties (toolbar, undo, keyboard shortcuts) to make editing markup easier.

    Read the article

  • What is the simplest method to fill the area under a geom_freqpoly line?

    - by mattrepl
    The x-axis is time broken up into intervals. There is an interval column in the data frame that specifies the time for each row. The column is a factor, where each interval is a different factor level. Plotting a histogram or line using geom_histogram and geom_freqpoly works great, but I'd like to have a line, like that provided by geom_freqpoly, with the area underneath filled. Currently I'm using geom_freqpoly like this: ggplot(quake.data, aes(interval, fill=tweet.type)) + geom_freqpoly(aes(group = tweet.type, colour = tweet.type)) + opts(axis.text.x=theme_text(angle=-60, hjust=0, size = 6)) I would prefer to have a filled area, such as that provided by geom_density, but without smoothing the line. UPDATE: geom_area has been suggested; is there any way to use a ggplot2-generated statistic, such as ..count.., for the geom_area's y-values? Or does the count aggregation need to occur prior to using ggplot2?
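    On the UPDATE: as far as I know geom_area can reuse the binning statistic directly, so the counts don't need to be aggregated beforehand - a sketch, assuming the interval factor can be coerced to numeric for a continuous x axis:

        library(ggplot2)

        ggplot(quake.data, aes(as.numeric(interval), fill = tweet.type)) +
          # stat = "bin" computes ..count.. per interval, like geom_freqpoly does,
          # but geom_area fills the area under the resulting line
          geom_area(aes(y = ..count.., group = tweet.type),
                    stat = "bin", position = "identity", alpha = 0.5)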

    Read the article

  • Put logic behind generated LinqToSql fields

    - by boris callens
    In a database I use throughout several projects there is a field that should actually be a boolean, but, for reasons nobody can explain to me, it is duplicated over two tables: in one it is a char ('Y'/'N') and in the other an int (1/0). When I generate a DataContext with LINQ to SQL the fields of course get these datatypes. It would be nice if I didn't have to drag this stupid choice of datatype throughout the rest of my application. Is there a way to give the generated classes a little bit of logic that just returns this.equals('Y') and this == 1 respectively? Preferably without having to make an EXTRA field in my partial class. One solution would be to give the generated field a totally different name that can only be accessed through the partial class, and then add the extra field with the original name and my custom logic in the partial class. I don't know how to alter the accessibility level in my generated class though... Any suggestions?
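    A sketch of that rename idea (table and member names are invented, and it assumes the O/R designer is used to rename the generated column property and lower its access level): the partial class then exposes the original name as a bool.

        // Table where the flag is stored as char 'Y'/'N'; the generated property
        // is assumed to have been renamed to ActiveFlag in the designer.
        public partial class Customer
        {
            public bool IsActive
            {
                get { return ActiveFlag == 'Y'; }
                set { ActiveFlag = value ? 'Y' : 'N'; }
            }
        }

        // Table where the same flag is stored as int 1/0.
        public partial class Order
        {
            public bool IsActive
            {
                get { return ActiveFlag == 1; }
                set { ActiveFlag = value ? 1 : 0; }
            }
        }

    If I remember right, the designer's property grid also has a per-column Access setting, which covers the accessibility part of the question.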

    Read the article

  • How do I use a master page container in a partial view

    - by user200295
    I have several partial views with JavaScript that I am trying to move to the bottom of the page. To do this I am trying to use a container in the master page. Master page: <asp:ContentPlaceHolder ID="Foot" runat="server"></asp:ContentPlaceHolder> Partial view (ascx): <asp:Content ID="header" ContentPlaceHolderID="head" runat="server"> ... </asp:Content> But I get this error: Parser Error Message: Content controls have to be top-level controls in a content page or a nested master page that references a master page. So how do I ensure that the JavaScript for the partial view ends up at the bottom of the page? Especially in cases where the HTML layout needs to be at the top of the page?

    Read the article

  • Expression Studio 4 Premium & SketchFlow question; WTH

    - by Refracted Paladin
    Through work I have a Visual Studio Premium with MSDN subscription that I love. However, my biggest disappointment of the last 12 months was discovering that our second-from-the-top-level subscription was not enough to get me SketchFlow! This is, most decidedly, NOT SHINY, and I am borderline distraught! What are my options? Upgrading to an Ultimate subscription for SketchFlow is out of the question. Am I forced, then, to stay with Blend 3 or purchase Blend 4 separately? If this is not a question I should ask here, please let me know and I'll delete it. I just tend to default to SO for all questions that Google can't answer, and Google did not answer this one.

    Read the article

  • Are there ways to improve NHibernate's performance regarding entity instantiation?

    - by denny_ch
    Hi folks, while profiling NHibernate with NHProf I noticed that a lot of time is spent on entity building, or at least spent outside the query duration (the database roundtrip). The project I'm currently working on prefetches some static data (which goes into the 2nd level cache) at application start. There are about 3000 rows in the result set (and maybe 30 columns), and the query runs in 75 ms. The overall duration observed by NHProf is about 13 SECONDS! Is this typical behaviour? I know that NHibernate shouldn't be used for bulk operations, but I didn't think that entity instantiation would be so expensive. Are there ways to improve performance in such situations, or do I have to live with it? Thx, denny_ch

    Read the article

  • How do I start a process to run logcat on Android?

    - by tangjie
    I want to read the Android system-level log. So I use the following code: Process mLogcatProc = null; BufferedReader reader = null; try { mLogcatProc = Runtime.getRuntime().exec( new String[] { "logcat", "-d", "AndroidRuntime:E [Your Log Tag Here]:V *:S" }); reader = new BufferedReader(new InputStreamReader(mLogcatProc .getInputStream())); String line; final StringBuilder log = new StringBuilder(); String separator = System.getProperty("line.separator"); while ((line = reader.readLine()) != null) { log.append(line); log.append(separator); } } catch (IOException e) {} finally { if (reader != null) try { reader.close(); } catch (IOException e) {} } I also added the required permission in AndroidManifest.xml. But I can't read any line. The StringBuilder log is empty. And the method mLogcatProc.waitFor returns 0. So how can I read the log?
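    For comparison, a hedged sketch of the same code with each logcat argument passed as its own array element and the placeholder tag replaced by the tag the app actually logs with ("MyTag" is made up); note that reading the log traditionally also needs the READ_LOGS permission in the manifest, and newer Android versions only let an ordinary app read its own lines anyway.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;

        public class LogcatReader {
            public static String dumpLog() throws IOException {
                // -d dumps and exits; each filter spec is a separate argument
                Process proc = Runtime.getRuntime().exec(
                        new String[] { "logcat", "-d", "MyTag:V", "AndroidRuntime:E", "*:S" });
                BufferedReader reader = new BufferedReader(
                        new InputStreamReader(proc.getInputStream()));
                StringBuilder log = new StringBuilder();
                String separator = System.getProperty("line.separator");
                try {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        log.append(line).append(separator);
                    }
                } finally {
                    reader.close();
                }
                return log.toString();
            }
        }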

    Read the article

  • Starting new transaction within existing one in Spring bean

    - by Marcus
    We have: @Transactional(propagation = Propagation.REQUIRED) public class MyClass implements MyInterface { ... MyInterface has a single method: go(). When go() executes we start a new transaction which commits/rolls back when the method is complete - this is fine. Now let's say in go() we call a private method of MyClass that has @Transactional(propagation = Propagation.REQUIRES_NEW). It seems that Spring "ignores" the REQUIRES_NEW annotation and does not start a new transaction. I believe this is because Spring AOP operates at the interface level (MyInterface) and does not intercept any calls to MyClass methods. Is this correct? Is there any way to start a new transaction within the go() transaction? Is the only way to call another Spring-managed bean that has its transactions configured as REQUIRES_NEW?
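    For reference, a sketch of the usual workaround (bean names are invented, and it assumes annotation-driven transaction management is enabled): because a call that never leaves the class never crosses the transactional proxy, the REQUIRES_NEW method is moved onto a second Spring-managed bean and injected back, so the call is intercepted again.

        // InnerWorker.java
        import org.springframework.stereotype.Service;
        import org.springframework.transaction.annotation.Propagation;
        import org.springframework.transaction.annotation.Transactional;

        @Service
        public class InnerWorker {
            @Transactional(propagation = Propagation.REQUIRES_NEW)
            public void doInNewTransaction() {
                // commits or rolls back independently of the caller's transaction
            }
        }

        // MyClass.java (same imports, plus org.springframework.beans.factory.annotation.Autowired)
        @Service
        @Transactional(propagation = Propagation.REQUIRED)
        public class MyClass implements MyInterface {

            @Autowired
            private InnerWorker innerWorker;

            public void go() {
                // ... work in the outer transaction ...
                innerWorker.doInNewTransaction();  // call goes through the proxy, so a new transaction starts
            }
        }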

    Read the article

  • Tracing\profiling instructions

    - by LeChuck2k
    Hi y'all. I'd like to statistically profile my C code at the instruction level. I need to know how many additions, multiplications, divides, etc. I'm performing. This is not your usual run-of-the-mill code profiling requirement. I'm an algorithm developer and I want to estimate the cost of converting my code to hardware implementations. For this, I'm being asked for the instruction call breakdown at run-time (parsing the compiled assembly isn't sufficient, as it doesn't consider loops in the code). After looking around, it seems VMware may offer a possible solution, but I still couldn't find the specific feature that would allow me to trace the instruction stream of my process. Are you aware of any profiling tools which enable this?

    Read the article

  • How to tune system settings for mongoDB on Linux?

    - by jsh
    Trying to squeeze a lot out of one question here -- please bear with me. Although the MongoDB man pages make several useful recommendations about system settings like ulimit (http://docs.mongodb.org/manual/reference/ulimit/), and other production factors (http://docs.mongodb.org/manual/administration/production-notes/) they seem mysteriously silent on things like virtual memory and swap settings. The closest we get to a hint is that "...the operating system’s virtual memory subsystem manages MongoDB’s memory..." (http://docs.mongodb.org/manual/faq/fundamentals/#does-mongodb-require-a-lot-of-ram). Running the same job - high writes and high reads on about 10,000,000 records in a single collection -- on my 4-processor, 4GB RAM macbook and an 8-core ubuntu box with 64GB RAM I saw dramatically WORSE read performance on the linux box with factory settings, and could hear the disk constantly spinning, indicating high I/O and presumably swapping. Yes, other things were happening on the box, but there was plenty of free RAM, disk space, etc.; furthermore, I did not see evidence that Mongo was expanding to take advantage of all that free RAM as it is touted to do. Linux box default settings were as follows: vm.swappiness =60 vm.dirty_background_ratio = 10 vm.dirty_ratio = 20 vm.dirty_expire_centisecs =3000 vm.dirty_writeback_centisecs=500 I hazarded some guesses looking at docs and blogs for other types of databases (Oracle, MYSQL, etc.), experimented, and adjusted as below. vm.swappiness=10 vm.dirty_background_ratio=5 vm.dirty_ratio=5 vm.dirty_writeback_centisecs=250 vm.dirty_expire_centisecs=500 I saw some immediate apparent improvements in read time. However, when I ran my test jobs again, read performance continued to be painfully sluggish during heavy writes. Then, I REBUILT the collection from an available data source - and suddenly I can read at 1ms or less per record WHILE doing the write job! So the question is really two-fold: 1) What are appropriate VM settings for MongoDB on Linux? 2) (bonus) Does Mongo do some checking or optimization with the OS while data is being built? In other words, if I have built a large data set with suboptimal VM or I/O settings, does Mongo make assumptions during the memory-mapping process that will fail to take advantage of optimizations down the road? Obviously I don't fully grok memory mapping under the hood (I was hoping I wouldn't have to). Any help appreciated...thanks! -j
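    For anyone comparing, here are the adjusted values above collected into a sysctl fragment so they survive a reboot (the file path is just a convention, and the numbers are the poster's own experiments, not an official MongoDB recommendation):

        # /etc/sysctl.d/60-mongodb.conf
        vm.swappiness = 10
        vm.dirty_background_ratio = 5
        vm.dirty_ratio = 5
        vm.dirty_writeback_centisecs = 250
        vm.dirty_expire_centisecs = 500

        # apply without rebooting:
        #   sudo sysctl -p /etc/sysctl.d/60-mongodb.conf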

    Read the article

  • Languages/Technologies advice

    - by BL
    Hi all, a bit of advice required here :). I recently graduated (Computer Science) and need to decide which path to take programming/technology-wise. I have knowledge of Java, C and SQL, most of it university-level stuff. I work daily with PHP/SQL building web apps. Which language/technology would you advise me to learn? I am very interested in database management, GIS, etc. Web dev is also very interesting to me. It is all a bit confusing, since I would like to learn something that will have value at least in the near future. I would like some ideas on which language/technology is a good choice in order to be marketable.

    Read the article

  • Memory Warning but Small Live Bytes

    - by Kamchatka
    Hi everyone, in my application I get a memory warning of level 1 and then 2, and finally a crash, after repeating some action (choosing a picture + processing) several times. The Leaks tool doesn't show any leak. I'm also following the Allocations tool in Instruments and my live bytes are roughly 4 MB, while overall I allocate 113 MB. At maximum I have maybe 20 MB in memory when the picture is loaded. Since I have to repeat an action to get to the crash, it is very likely to be a memory leak. However, I don't know how to locate it, since my live bytes are only 4 MB and only things that are supposed to be allocated show up (apart from a small leak of ~100 KB in the UIImagePickerController). How much can I trust the memory leak/allocation tools? Would you have any advice to help me locate the cause of the problem?

    Read the article

  • How does Twitter for iPhone bookmarklet work?

    - by Igor Zevaka
    Twitter client (formerly Tweetie) allows you to define a bookmarklet in Safari that launches the app. I want to know which iPhone API allows you to register the protocol specifier (or whatever it's called) - in this case "tweetie:" - in order for this bookmarklet to work. The instructions can be found here and the bookmarklet itself is below. javascript:window.location='tweetie:'+window.location Clicking the above bookmark is the same as typing in "tweetie:http://google.com" into the address bar. This is obviously supported on the OS/Browser level, much the same as tel: URIs. Am I correct in understanding that developers can add arbitrary URI protocol specifiers as a part of app installation?
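    As far as I understand it, yes - an app claims a scheme by declaring it in its Info.plist under CFBundleURLTypes, and then receives the full URL in its application delegate (application:handleOpenURL: at the time this was asked). A minimal sketch with placeholder names:

        <!-- Info.plist fragment; "myapp" and the identifier are placeholders -->
        <key>CFBundleURLTypes</key>
        <array>
            <dict>
                <key>CFBundleURLName</key>
                <string>com.example.myapp</string>
                <key>CFBundleURLSchemes</key>
                <array>
                    <string>myapp</string>
                </array>
            </dict>
        </array>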

    Read the article
