Search Results

Search found 8687 results on 348 pages for 'per fagrell'.


  • logging in scala

    - by IttayD
    In Java, the standard idiom for logging is to create a static variable for a logger object and use it in the various methods. In Scala, the idiom seems to be to create a Logging trait with a logger member and mix the trait into concrete classes. This means that each time an object is created it calls the logging framework to get a logger, and the object is also bigger due to the additional reference. Is there an alternative that allows the ease of use of "with Logging" while still using a per-class logger instance? EDIT: My question is not about how one can write a logging framework in Scala, but rather how to use an existing one (log4j) without incurring a performance overhead (getting a reference for each instance) or code complexity. Also, yes, I want to use log4j, simply because I'll be using third-party libraries written in Java that are likely to use log4j.
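    A minimal sketch of the mixin idiom under discussion, assuming log4j (the OrderService class below is illustrative, not from the question): log4j caches loggers by name, so the lookup made for each new instance returns the same shared per-class logger, and making the field lazy defers even that lookup until the first log statement. This does not remove the extra reference per object, only the repeated calls into the framework.

        import org.apache.log4j.Logger

        // Sketch only: log4j caches Logger objects by name, so every instance of a
        // given class receives the same per-class logger; the lazy val avoids the
        // lookup until logging actually happens. OrderService is a made-up example.
        trait Logging {
          @transient protected lazy val logger: Logger = Logger.getLogger(getClass.getName)
        }

        class OrderService extends Logging {
          def placeOrder(id: Long): Unit = logger.info("placing order " + id)
        }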

    Read the article

  • Should I really be expected to work more than 40 hours a week just because I am a 'salaried developer'?

    - by Ryan
    My boss says that I shouldn't be counting hours, but I am only getting paid for a full-time job of 40 hours per week. I don't get it. I could be using the rest of the hours in my day to run a small side business and earn more income. However, I have noticed other people working whatever hours it takes to hit the deadline. How is this fair? Of course, the argument was 'if you worked more and increased your value then you will get more money'. A friend I regard as one of the smartest people I know (he engineered his own sonar system, for example) said that you should never work beyond what you are getting paid for. Thoughts?

    Read the article

  • MySQL: Storage of multiple text fields for a record

    - by Tom
    An inexperienced question: I need to store about 10 unknown-length text fields per record in a MySQL table. I expect no more than 50K rows in total for this table, but speed is important. The database actions will be solely SELECTs for all practical purposes. I'm using InnoDB. In other words: id | text1 | text2 | text3 | .... | text10 As I understand it, MySQL will store the text elsewhere and use its own indicators on the table itself, so I'm wondering whether there are any fundamental performance implications that I should be worrying about given the way the data is stored (i.e. several "sub-fetches" from the table). Thank you.

    Read the article

  • How do I throttle my site's API users?

    - by scotts
    The legitimate users of my site occasionally hammer the server with API requests that cause undesirable results. I want to institute a limit of no more than say one API call every 5 seconds or n calls per minute (haven't figured out the exact limit yet). I could obviously log every API call in a DB and do the calculation on every request to see if they're over the limit, but all this extra overhead on EVERY request would be defeating the purpose. What are other less resource-intensive methods I could use to institute a limit? I'm using PHP/Apache/Linux, for what it's worth.
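    One lighter-weight possibility, sketched below in Scala rather than PHP (the RateLimiter class and its names are illustrative): keep a per-client counter for the current one-minute window in memory and reject a request once the counter passes the limit, so nothing is written to the database per call. In a PHP/Apache setup the counters would more naturally live in something like APC or memcached, but the bookkeeping is the same.

        import scala.collection.mutable

        // Sketch of a fixed-window limiter: one (window, count) pair per client,
        // reset whenever a new one-minute window starts. All names are illustrative.
        class RateLimiter(maxPerMinute: Int) {
          private val counters = mutable.Map.empty[String, (Long, Int)]

          def allow(clientKey: String): Boolean = synchronized {
            val window = System.currentTimeMillis() / 60000            // current minute
            val (win, count) = counters.getOrElse(clientKey, (window, 0))
            val newCount = if (win == window) count + 1 else 1         // reset on a new window
            counters(clientKey) = (window, newCount)
            newCount <= maxPerMinute
          }
        }

    A call site would check limiter.allow(clientKey) before doing any real work and return an error response when it comes back false.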

    Read the article

  • Is it possible to partition more than one way at a time in SQL Server?

    - by meeting_overload
    I'm considering various ways to partition my data in SQL Server. One approach I'm looking at is to partition a particular huge table into 8 partitions, then within each of these partitions to partition on a different partition column. Is this even possible in SQL Server, or am I limited to defining one partition column + function + scheme per table? I'm interested in the more general answer, but this strategy is the one I'm considering for a Distributed Partitioned View, where I'd partition the data under the first scheme, using the DPV to distribute the huge amount of data over 8 machines, and then on each machine partition that portion of the full table on another partition key in order to be able to drop (for example) sub-partitions as required.

    Read the article

  • Disappearing InlineShapes

    - by Rick
    I frequently edit files that have numerous InlineShapes, one per paragraph. One of the edits I make is to eliminate "double" paragraph marks, either using Search and Replace or a simple macro that performs the same operation (e.g., search for "^p^p" and replace with "^p"). This operation works fine in Word 97-2003 documents (.doc), but when I try it on .docx documents, I lose all of the InlineShapes. The .docx files seem to ignore the InlineShapes, therefore seeing the paragraphs containing them as "empty". Search and Replace sees "^p^p", replaces it with "^p", and the graphic goes away. Any thoughts on how to work around this?

    Read the article

  • Fastest way to calculate summary of database field

    - by Jo-wen
    I have an MS SQL table with the following structure:

        Name nvarchar; Sign nvarchar; Value int

    Example contents:

        Test1, 'plus', 5
        Test1, 'minus', 3
        Test2, 'minus', 1

    I would like to have totals per Name (add when Sign is 'plus', subtract when Sign is 'minus'). Result:

        Test1, 2
        Test2, -1

    I want to show these results (and update them when a new record is added), and I'm looking for the fastest solution! [Sproc? Fast-forward cursor? Calculate in .NET?]
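    For what it's worth, here is the same aggregation expressed over an in-memory collection in Scala (the Row case class and sample data simply mirror the example above; this is a logic sketch, not a claim about the fastest approach). In the database itself the usual shape is a GROUP BY on Name with a CASE expression that negates Value when Sign is 'minus'.

        // Logic sketch only: add 'plus' values, subtract 'minus' values, grouped by name.
        case class Row(name: String, sign: String, value: Int)

        val rows = Seq(Row("Test1", "plus", 5), Row("Test1", "minus", 3), Row("Test2", "minus", 1))

        val totals = rows.groupBy(_.name).map { case (name, rs) =>
          name -> rs.map(r => if (r.sign == "plus") r.value else -r.value).sum
        }
        // totals: Map(Test1 -> 2, Test2 -> -1)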

    Read the article

  • Specifying different page base classes for different directories in your web.config file

    - by gbinflames
    I have a website (coded in VB.NET) with an "admin" section (the admin section being a folder in the actual site). Every page of the site uses my own custom class as its page base (which has been set in the web.config file), but I want the files in the "admin" folder to inherit from a different class. Is there any way to inherit different classes for pages in different folders using the web.config or some other equally global method (meaning some way other than setting the inheritance on a per-page basis)?

    Read the article

  • Access denied when using RunWithElevatedPrivileges?

    - by James123
    I want a regular user to be able to access the "User Information List" in the My Site root site. I am using the "RunWithElevatedPrivileges" method, but it still throws an access denied error. For example, my root site collection for My Site is "http://network.test.com", and the user wants to access the User Information List in this site collection. How can he access it?

        SPSecurity.RunWithElevatedPrivileges(delegate
        {
            using (SPSite site = new SPSite(SPContext.Current.Web.Site.ID))
            {
                ServerContext sc = ServerContext.Current;
                UserProfileManager upm = new UserProfileManager(sc);
                UserProfile up = null;
                // get current user's profile (visitor)
                if (upm.UserExists(SPContext.Current.Web.CurrentUser.LoginName))
                {
                    up = upm.GetUserProfile(SPContext.Current.Web.CurrentUser.LoginName);
                    SPWeb web = SPContext.Current.Web;
                    SPList userInformationList = web.Lists["User Information List"];

    Read the article

  • tsql proc logic help

    - by bacis09
    I am weak in SQL and need some help working through some logic in my proc. There are three pieces: a stored procedure, table1, and table2.

    Table1 stores the most recent data for specific customer IDs:

        Customer_id   status_dte   status_cde   app_dte
        001           2010-04-19   Y            2010-04-19

    Table2 stores the history of data for specific customer IDs. For example:

        Log_id   customer_id   status_dte   status_cde
        01       001           2010-04-20   N
        02       001           2010-04-19   Y
        03       001           2010-04-19   N
        04       001           2010-04-19   Y

    The stored procedure currently throws an error if the status date from table1 is earlier than app_dte in table1:

        If @status_dte < app_date
            Error

    Note: @status_dte is a variable holding the status_dte from table1. However, I want it to throw an error when the EARLIEST status_dte from table2 with a status_cde of 'Y' is less than the app_dte column in table1. Keep in mind that this earliest date is not stored anywhere; the history of data changes per customer. Another customer might have the following history:

        Log_id   customer_id   status_dte   status_cde
        01       002           2010-04-20   N
        02       002           2010-04-18   N
        03       002           2010-04-19   Y
        04       002           2010-04-19   Y

    Any ideas on how I can approach this?
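    To make the intended rule concrete, here is a collection-level sketch in Scala (the HistoryRow case class and the shouldError helper are illustrative, not part of the procedure): the error should fire when the earliest status_dte in table2 with status_cde = 'Y' for the customer falls before app_dte from table1. In T-SQL that earliest date would typically come from MIN(status_dte) filtered on the customer and status_cde = 'Y'.

        import java.time.LocalDate

        // Sketch of the rule only; all names are illustrative.
        case class HistoryRow(customerId: String, statusDte: LocalDate, statusCde: String)

        def shouldError(history: Seq[HistoryRow], customerId: String, appDte: LocalDate): Boolean =
          history
            .filter(r => r.customerId == customerId && r.statusCde == "Y")
            .map(_.statusDte)
            .reduceOption((a, b) => if (a.isBefore(b)) a else b)   // earliest 'Y' date, if any
            .exists(_.isBefore(appDte))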

    Read the article

  • Performance optimization for SQL Server: decrease stored procedure execution time or unload the server?

    - by tim
    We have a web service which provides search over hotels. There is a problem with performance: a single request to the service takes around 5000 ms. Almost all of the time is spent in the database executing stored procedures. During the request our server (MSSQL 2008) consumes ~90% of the processor time. When 2 requests are made in parallel, the average time grows to around 7000 ms, and as the number of requests increases, the average response time increases as well. We have 20-30 requests per minute. Which kind of optimization is best in this case, bearing in mind that the goal is to provide a stable response time for the service: 1) try to decrease the stored procedures' execution time, or 2) try to find a way to unload the server? It would be interesting to hear from people who deal with booking sites.

    Read the article

  • performance monitoring tools for multi-tenant web application

    - by Anton
    We need to monitor the performance of our Java web app, and we are looking for some tools which can help us with this task. The major difficulty is that we are a SaaS provider with a multi-tenant server architecture, with hundreds of customers running on the same hardware. So far we have tried commercial products like DynaTrace and Coradinat, but unfortunately they haven't got the job done. What we need is a simple report which would tell us whether we had performance problems on each customer site in a specified period of time. Mostly it will be response time per customer, but we will also need some more specifics based on the URLs. Please let me know if someone has had any experience with setting up such monitoring. Thanks!
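    If a home-grown fallback is acceptable while evaluating tools, here is a rough sketch of the measurement involved, written in Scala against the servlet API (the TenantTimingFilter name and the tenant-from-first-path-segment rule are assumptions, not details from the question): accumulate request count and total response time per customer and URL, then report from those counters.

        import javax.servlet._
        import javax.servlet.http.HttpServletRequest
        import scala.collection.mutable

        // Sketch only. Assumes the tenant can be read from the first path segment of
        // the request URI, which will not hold for every multi-tenant layout.
        class TenantTimingFilter extends Filter {
          // key -> (request count, total elapsed milliseconds)
          private val stats = mutable.Map.empty[String, (Long, Long)]

          override def doFilter(req: ServletRequest, res: ServletResponse, chain: FilterChain): Unit = {
            val start = System.currentTimeMillis()
            try chain.doFilter(req, res)
            finally {
              val uri = req.asInstanceOf[HttpServletRequest].getRequestURI
              val tenant = uri.stripPrefix("/").split("/").headOption.getOrElse("unknown")
              val elapsed = System.currentTimeMillis() - start
              synchronized {
                val (count, total) = stats.getOrElse(tenant + " " + uri, (0L, 0L))
                stats(tenant + " " + uri) = (count + 1, total + elapsed)
              }
            }
          }

          override def init(config: FilterConfig): Unit = ()
          override def destroy(): Unit = ()
        }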

    Read the article

  • Hidden Features and Dark Corners of STL?

    - by Andrei
    C++ developers all know the basics of C++: declarations, conditionals, loops, operators, etc. Some of us have even mastered things like templates, the object model, complex I/O, etc. But what are the most hidden features, tricks, or dark corners of C++/STL that even C++ fans, addicts, and experts barely know? I am talking about a seasoned C++ programmer (be she/he a developer, student, fan, all three, etc.), who thinks (s)he knows something 99% of us have never heard of or dreamed about. Something that not only makes his/her work easier, but is also cool and hackish. After all, C++ is one of the most used programming languages in the world, so it should have intricacies that only a privileged few know about and want to share with us. Boost is welcome too! One per post, with an example please. P.S. Examples are important for other developers to copy and paste!

    Read the article

  • Are concurrency issues possible when using the WCF ServiceBehavior attribute set to ConcurrencyMode.Multiple?

    - by Brandon Linton
    We have a WCF service that makes a good deal of transactional NHibernate calls. Occasionally we were seeing SQL timeouts, even though the calls were updating different rows and the tables were set to row level locking. After digging into the logs, it looks like different threads were entering the same point in the code (our transaction using block), and an update was hanging on commit. It didn't make sense, though, because we believed that the following service class attribute was forcing a unique execution thread per service call: [ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple, InstanceContextMode = InstanceContextMode.PerCall)] We recently changed the concurrency mode to ConcurrencyMode.Single and haven't yet run into any issues, but the bug was very difficult to reproduce (if anyone has any thoughts on flushing a bug like that out, let me know!). Anyway, that all brings me to my question: shouldn't an InstanceContextMode of PerCall enforce thread-safety within the service, even if the ConcurrencyMode is set to multiple? How would it be possible for two calls to be serviced by the same service instance? Thanks!

    Read the article

  • Where does complexity bloat come from?

    - by AareP
    Many of our design decisions are based on a gut feeling about how to avoid complexity and bloat. Some of our complexity fears are justified; we have plenty of painful experience of throwing away deprecated code. Other times we learn that some particular task isn't really as complex as we thought it to be. We notice, for example, that maintaining 3000 lines of code in one file isn't that difficult... or that using special-purpose "dirty flags" isn't really bad OO practice... or that in some cases it's more convenient to have 50 variables in one class than to have 5 different classes with shared responsibilities... One friend has even stated that adding functions to a program isn't really adding complexity to your system. So, what do you think: where does bloated complexity creep in from? Is it variable count, function count, code line count, code lines per function, or something else?

    Read the article

  • Using Entity Framework, how do I specify a sort on a navigation property?

    - by Jared
    I have two tables: [Category], [Item]. They are connected by a join table: [CategoryAndItem]. It has two primary key fields: [CategoryKey], [ItemKey]. Foreign keys exist appropriately and Entity has no problem pulling this in and creating the correct navigation properties that connect the entity objects. Basically each category can have multiple items, and items can be in multiple categories. The problem is that the order of items is specified per category, so that a particular item might be third in one category but fifth in another. In the past, I have added a [Sequence] field to the join table and modified the stored procedure to handle it. But since Entity is replacing my stored procedures, I need to figure out how to make Entity handle the sequence. Any suggestions?

    Read the article

  • WordPress Taxonomy

    - by ninusik
    I am creating a WordPress blog (no live link yet because it's still at the planning stage). I want to set up the following tag structure:

        Category 1: Services
        Tags: web design, logo design, print design, etc.

        Category 2: Type of clients
        Tags: small businesses, large companies

    So each post will be tagged with one or more tags from Category 1, and one tag from Category 2. However, I have heard that using more than one category per post is a bad idea in terms of SEO. But then, how can I go about it? I don't want any SEO penalties, but I will need to somehow categorize each post using these two distinct categories. Should I create some custom taxonomies? That seems like overkill to me. The solution is probably something rather simple, but it just escapes me. I'm not very experienced with taxonomies, so I'll appreciate any suggestions.

    Read the article

  • Rails Asset Caching Breaks First few page loads

    - by Brian Armstrong
    We're using Rails asset caching for JS and CSS like this: <%= javascript_include_tag :defaults, 'autocomplete', 'searchbox', 'jqmodal', :cache => set_asset_cache(:admins) %> In our deploy we call rake tmp:assets:clear each time. The problem is that the first few page loads after a deploy come up with no css on the page. I guess until the cached all.js and all.css have been regenerated. We deploy many times per day and this is scary for any users who happen to come across a busted page. Have people found any way to make this smoother so the new cached assets are guaranteed to be there on the first new page load?

    Read the article

  • animate rotation

    - by Mike
    I'm trying to animate a rotation using CATransform3DMakeRotation, but the problem is that once the animation is finished, the image goes back to its initial position, i.e. back to zero. I'd like to keep it where it finished rotating. How would I do that? EDIT: What I'm trying to do is create the same compass that comes with the new iPhone. Basically the location manager gives me new headings every few seconds (or several per second). Using the new heading and the timestamp, I was trying to get a smooth animation of the image but wasn't getting anywhere. The only thing which seems to work is applying the transform directly, e.g.

        compassimage.layer.transform = CATransform3DMakeRotation(newHeading.trueHeading * M_PI / 180, 0, 0, 1.0);

    but that's not animated...

    Read the article

  • What are the lesser-known but cool data structures?

    - by f3lix
    There are some data structures around that are really cool but are unknown to most programmers. Which are they? Everybody knows linked lists, binary trees, and hashes, but what about skip lists or Bloom filters, for example? I would like to know more data structures that are not so common, but are worth knowing because they rely on great ideas and enrich a programmer's tool box. PS: I am also interested in techniques like Dancing Links which make interesting use of the properties of a common data structure. EDIT: Please try to include links to pages describing the data structures in more detail. Also, try to add a couple of words on why a data structure is cool (as Jonas Kölker already pointed out). Also, try to provide one data structure per answer. This will allow the better data structures to float to the top based on their votes alone.
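    As a concrete taste of one of the structures named above, here is a toy Bloom filter sketch in Scala (the sizes and hash mixing are illustrative only): a few derived positions are set in a bit array per element, and membership tests can return false positives but never false negatives, which is what makes the structure so cheap.

        import scala.collection.mutable

        // Toy sketch: not production hashing, just enough to show the idea.
        class BloomFilter(bits: Int, hashes: Int) {
          private val set = new mutable.BitSet(bits)

          private def positions(item: String): Seq[Int] = {
            val h = item.hashCode
            (0 until hashes).map(i => ((h + i * 16777619) & Int.MaxValue) % bits)
          }

          def add(item: String): Unit = positions(item).foreach(set += _)
          def mightContain(item: String): Boolean = positions(item).forall(set.contains)
        }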

    Read the article

  • Debugging of native code

    - by graham.reeds
    I have a C# service that is calling a C DLL that was originally written in VC6. There is a bug in the DLL which I am trying to inspect. After having a nightmare trying to get debugging to work, I eventually added the DLL to the VS2005 solution containing the C# service and added the necessary _CRT_SECURE_NO_WARNINGS. The debug version of the service is registered using the 'installutil.exe' tool. I can get the debugger to break just before the line where the DLL is entered via a call to System.Diagnostics.Debugger.Break();. I found some instructions on the net regarding stepping into unmanaged code, and enabled the 'Enable unmanaged code debugging' check box. I've also tried turning on the Options > Debugging > Native 'Load DLL exports' and 'Enable RPC Debugging' options (even though it's not COM), and I've also copied the debug DLL and .pdb to the same bin directory as the service. However, the unmanaged code is not being stepped into, which is what I really need. UPDATE: I found the Debugging Type in the DLL properties and set it to 'Mixed', as per suggestions on several sites, but to no avail.

    Read the article

  • ASP.NET MVC: how do I add a second model to a view? (i.e., add a dropdown list to a page with an HTML grid)

    - by John S
    I have been able to find lots of examples of adding a Dropdown list to a view but I need to add a dropdown list to a view that also has a Webgrid on it. This entails two different models and from what I see I can only have one per view. The DDL will be filled from one model and the grid from the other. I'm just trying to filter the displayed data in the grid with the data selected in the ddl. Any examples or articles would be greatly appreciated. TIA

    Read the article

  • Counting the most-used tag with MySQL

    - by Jack W-H
    Hi folks. My problem is that I'm trying to count which tag has been used most in a table of user-submitted code, but the problem is with the database structure. The current query I'm using is this:

        SELECT tag1, COUNT(tag1) AS counttag FROM code GROUP BY tag1 ORDER BY counttag DESC LIMIT 1

    This is fine, except it only counts occurrences of tag1, and my database has 5 tags per post, so there are columns tag1, tag2, tag3, tag4, tag5. How do I get the highest-occurring tag value from all 5 columns in one query? Jack
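    The usual shape of the fix is to unpivot the five tag columns into a single stream of tag values first and count that; below is a collection-level Scala sketch of the idea (the Post case class and sample data are made up). In MySQL itself the same unpivoting is normally done with a UNION ALL of tag1 through tag5 in a derived table, then GROUP BY and ORDER BY the count.

        // Sketch of the unpivot-then-count idea; data is illustrative.
        case class Post(tags: Seq[Option[String]])   // tag1..tag5, possibly empty

        val posts = Seq(
          Post(Seq(Some("php"), Some("mysql"), None, None, None)),
          Post(Seq(Some("mysql"), Some("sql"), None, None, None)))

        val topTag = posts
          .flatMap(_.tags.flatten)                   // one flat stream of tag values
          .groupBy(identity)
          .map { case (tag, occurrences) => tag -> occurrences.size }
          .maxBy(_._2)
        // topTag: (mysql, 2)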

    Read the article

  • How do I fix "error 1004, 0, Unable to find property" in an Entity Framework 4 WinForms application?

    - by Ivan
    I've designed an EF4 model (quite complex inheritance, lots of small tables, including multiple self-referencing ones), generated a database from it (table-per-type), and inserted some basic data manually. It works fine in an ASP.NET Dynamic Data Entities web application with full automatic scaffolding. But when, in a WinForms application using the same model (I share it as part of a class library), I construct a query and bind a combo box to it (the way it's shown here), I get an InnerException: "Internal .NET Framework Data Provider error 1004, 0, Unable to find property..." I've found a question about the same problem here (including a sample to reproduce the error) but no answer. I'm using the final Visual Studio 2010, not a beta.

    Read the article

  • Fastest way to compress a database or .bak file and transfer it

    - by Nai
    As per the question title. I wonder if there are special programmes or commands that make zipping up a .bak file and transferring it super quick. I read about xp_cmdshell here, but I'm not sure about the speed. My .bak file is about 12 gigs at the moment. Related to this is the possibility of using Red Gate's SQL Data Compare to just transfer the differential data across the network pipeline, but I have never used SQL Data Compare before and I'm not sure how it goes about doing INSERTs on tables with primary keys and such. Also, I'm not sure about the speed. Does anyone have any experience with this programme or similar programmes? Cheers!

    Read the article
