Search Results

Search found 4278 results on 172 pages for 'capacity planning'.

  • Why do ticket websites crash?

    - by Soloman Smart
    I hope this fits within the scope of Server Fault. Apologies if it doesn't. Why do websites selling tickets for major concerts/events still crash when they make the tickets available? Surely they know there is going to be huge demand and can ensure they have the capacity to deal with it? This may seem like a very simple question, so apologies to those who understand! Thanks!

  • How to get the correct battery status?

    - by GUI Junkie
    Ever since I installed Ubuntu on this machine, the battery status says: not present. Looking at this answer, however, I find that /proc/acpi/battery/BAT1/info (sometimes it's /proc/acpi/battery/BAT0/info; use tab completion to help) has the following info:

        present:                 yes
        design capacity:         4400 mAh
        last full capacity:      4400 mAh
        battery technology:      rechargeable
        design voltage:          11100 mV
        design capacity warning: 300 mAh
        design capacity low:     132 mAh
        cycle count:             0
        capacity granularity 1:  32 mAh
        capacity granularity 2:  32 mAh
        model number:            BAT1
        serial number:           11
        battery type:            11
        OEM info:                11

    In accordance with this answer, I've checked the /proc/acpi/battery/BAT1/state file:

        present:            yes
        capacity state:     ok
        charging state:     charged
        present rate:       unknown
        remaining capacity: unknown
        present voltage:    10000 mV

    The acpi -b command returns:

        Battery 0: Unknown, 0%, rate information unavailable

    Any suggestions on getting the battery info updated?

  • Data Source Connection Pool Sizing

    - by Steve Felts
    One of the most time-consuming procedures of a database application is establishing a connection. The connection pooling of the data source can be used to minimize this overhead. That argues for using the data source instead of accessing the database driver directly.

    Configuring the size of the pool in the data source is somewhere between an art and a science – this article will try to move it closer to science. From the beginning, the WLS data source has had initial capacity and maximum capacity configuration values. When the system starts up and when it shrinks, initial capacity is used. The pool can grow to maximum capacity. Customers found that they might want to set the initial capacity to 0 (more on that later) but didn't want the pool to shrink to 0. In WLS 10.3.6, we added minimum capacity to specify the lower limit to which a pool will shrink. If minimum capacity is not set, it defaults to the initial capacity for upward compatibility. We also did some work on the shrinking in release 10.3.4 to reduce thrashing; the algorithm that used to shrink to the maximum of the currently used connections or the initial capacity (basically, the unused connections were all released) was changed to shrink by half of the unused connections.

    The simple approach to sizing the pool is to set the initial/minimum capacity to the maximum capacity. Doing this creates all connections at startup, avoiding creating connections on demand, and the pool is stable. However, there are a number of reasons not to take this simple approach.

    When WLS is booted, the deployment of the data source includes synchronously creating the connections. The more connections that are configured in initial capacity, the longer the boot time for WLS (there have been several projects for parallel boot in WLS but none that are available). Related to creating a lot of connections at boot time is the problem of logon storms (the database gets too much work at one time). WLS has a solution for that by setting the login delay seconds on the pool, but that also increases the boot time.

    There are a number of cases where it is desirable to set the initial capacity to 0. By doing that, the overhead of creating connections is deferred out of the boot and the database doesn't need to be available. An application may not want WLS to automatically connect to the database until it is actually needed, such as for some cold/warm failover configurations.

    There are a number of cases where minimum capacity should be less than maximum capacity. Connections are generally expensive to keep around. They cause state to be kept on both the client and the server, and the state on the backend may be heavy (for example, a process). Depending on the vendor, connection usage may cost money. If the workload is not constant, then database connections can be freed up by shrinking the pool when connections are not in use. When using Active GridLink, connections can be created as needed according to runtime load balancing (RLB) percentages instead of by connection load balancing (CLB) during data source deployment.

    Shrinking is an effective technique for clearing the pool when connections are not in use. In addition to the obvious reason that there are times when the workload is lighter, there are some configurations where the database and/or firewall conspire to make long-unused or too-old connections no longer viable. There are also some data source features where the connection has state and cannot be used again unless the state matches the request. Examples of this are identity-based pooling, where the connection has a particular owner, and XA affinity, where the connection is associated with a particular RAC node. At this point, WLS does not re-purpose (discard/replace) connections, and shrinking is a way to get rid of the unused existing connection and get a new one with the correct state when needed.

    So far, the discussion has focused on the relationship of initial, minimum, and maximum capacity. Computing the maximum size requires some knowledge about the application and the current number of simultaneously active users, web sessions, batch programs, or whatever access patterns are common. The application should be written to only reserve and close connections as needed, but multiple statements, if needed, should be done in one reservation (don't get/close more often than necessary). This means that the size of the pool is likely to be significantly smaller than the number of users.

    If possible, you can pick a size and see how it performs under simulated or real load. There is a high-water mark statistic (ActiveConnectionsHighCount) that tracks the maximum connections concurrently used. In general, you want the size to be big enough so that you never run out of connections, but no bigger. It will need to deal with spikes in usage, which is where shrinking after the spike is important. Of course, the database capacity also has a big influence on the decision, since it's important not to overload the database machine. Planning also needs to happen if you are running in a Multi Data Source or Active GridLink configuration and expect that the remaining nodes will take over the connections when one of the nodes in the cluster goes down. For XA affinity, additional headroom is also recommended.

    In summary, setting initial and maximum capacity to be the same may be simple, but there are many other factors that may be important in making the decision about sizing.
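
    The 10.3.4 shrinking behavior described above is easy to picture with a small sketch. Here is a minimal illustration in Python (purely hypothetical code modeling the prose, not how WLS is implemented): release half of the currently unused connections on each shrink pass, never dropping below minimum capacity.

        # Illustrative sketch of shrink-by-half pool sizing (hypothetical names;
        # this models the description above, not actual WLS internals).
        def shrink(total, in_use, min_capacity):
            """Return the new pool size after one shrink pass."""
            unused = total - in_use
            new_total = total - unused // 2      # release half the idle connections
            return max(new_total, min_capacity, in_use)

        # Example: a pool of 20 with 4 connections in use and minimum capacity 5
        print(shrink(20, 4, 5))                  # -> 12 (8 of 16 idle released)

    Under the pre-10.3.4 behavior, the same pass would have dropped straight to max(in_use, initial_capacity), which is the thrashing the change was meant to reduce.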

  • Finishing an iteration early

    - by f1dave
    I'd like some input on this from those working with agile methodologies... A current project is finding that development on our planned user stories is finishing some time before the end of the iteration, and that the testing effort and business acceptance is what's actually dragging us out longer towards the end. This means that the devs in question have spare time, and they're essentially going out to the iteration+1 backlog and starting work on cards there before our current iteration cards are 'done'. As iteration manager, I want to put a stop to this - I want a more team-orientated approach where the group takes ownership of getting all the cards done, as opposed to "Well, dev's done, so what do I dev next?" The problem I face is convincing the team of this. On one hand, I understand why the devs don't want to test the code they've written (there are unit tests they write, of course, but the manual testing to be done could be influenced by their bias). The team sees working ahead as making our next iterations easier, because a lot of the work is done before we start. I see this as screwing with the whole system of planning/actuals - but it's difficult to convince the team as to why this matters. What advice can you guys and girls give? How do we stop devs reaching ahead? What should they be doing instead? How much of a problem is this in the scheme of things, if things are still getting done?

  • How should I create a mutable, varied jtree with arbitrary/generic category nodes?

    - by Pureferret
    Please note: I don't want coding help here; I'm on Programmers for a reason. I want to improve my program planning/writing skills, not (just) my understanding of Java. I'm trying to figure out how to make a tree which has an arbitrary category system, based on the skills listed for this LARP game here. My previous attempt had a bool for whether a skill was also a category; trying to code around that was messy. Drawing out my tree, I noticed that only my 'leaves' were skills and I'd labeled the others as categories. Explanation of the tree: the tree is 'born' with a set of hard-coded highest-level categories (Weapons, Physical and Mental, Medical, etc.). From this, the user needs to be able to add a skill. Ultimately they want to add 'One-handed Sword Specialisation', for instance. To do so you'd ideally click 'add' with Weapons selected, then select One-handed from a combo box, then click 'add' again and enter a name in a text field. Then click 'add' again to add a 'level' or 'tier': first Proficiency, then Specialisation. Of course, if you want to buy a different skill it's completely different, which is what I'm having trouble getting my head around, let alone programming in. What is a good system for describing this sort of tree in code? All the other JTree examples I've seen have some predictable pattern, and I don't want to have to code this all in 'literals'. Should I be using abstract classes? Interfaces? How can I make this sort of cluster of objects extensible when I add in other skills not listed above that behave differently? If there is no good system to use, is there a good process for working out how to do this sort of thing?
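
    For what it's worth, the shape most designs land on here is the composite pattern: category nodes hold children, skill leaves carry their own tiers, and both share one node type so a tree view can render them uniformly. A minimal sketch (in Python purely to keep the model concise; all names are made up, and the same structure maps onto the Java classes backing a JTree model):

        # Hypothetical composite model for a category/skill tree.
        class Node:
            def __init__(self, name):
                self.name = name
                self.children = []

            def add(self, child):
                self.children.append(child)
                return child

        class Category(Node):
            """An arbitrary grouping (Weapons, One-handed, ...); never a leaf."""

        class Skill(Node):
            """A purchasable leaf; carries its tiers instead of an is-category bool."""
            def __init__(self, name, tiers):
                super().__init__(name)
                self.tiers = list(tiers)    # e.g. ["Proficiency", "Specialisation"]

        # Building the example from the question:
        root = Category("Skills")
        weapons = root.add(Category("Weapons"))
        one_handed = weapons.add(Category("One-handed"))
        one_handed.add(Skill("Sword", ["Proficiency", "Specialisation"]))

    Skills that behave differently then become further subclasses of Skill, so the tree itself never needs an "is this a category?" flag.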

  • When you’re on a high, start something big

    - by BuckWoody
    Most days are pretty average – we have some highs, some lows, and just regular old work to do. But some days the sun is shining, your co-workers are especially nice, and everything just falls into place. You really *enjoy* what you do. Don't let that moment pass. All of us have "big" projects that we need to tackle. Things that are going to take a long time, and a lot of money. Those kinds of data projects take a LOT of planning, and many times we put that off just to get to the day's work. I've found that the "high" moments are the perfect time to take on these big projects. I'm more focused, and more importantly, more positive. And as the quote goes, "whether you think you can or you think you can't, you're probably right." You'll find a way to make it happen if you're in a positive mood. Now – having those "great days" is actually something you can influence, but I'll save that topic for a future post. I have a project to work on. :)

  • How can I justify one technology over another? (Java over .NET)

    - by user674887
    We are a Java/.NET company, and my team and I are planning a project for a client. One of the requirements is that the project has to be done in .NET. I've asked about this requirement, and the client said that it doesn't matter, and that if I have a good reason we can use another technology. But I have to justify the decision. As a Project Manager / Analyst, I'm interested in doing the project in Java because:

    - The team knows Java much better, regarding both the language and the frameworks.
    - I don't know anything about .NET technology (and maybe we could make bad decisions by thinking in a Java way of doing things).
    - There are other people in the company who have more skills in .NET, but they have other projects with higher priority.

    From experience, I'm sure that if we use Java, the project will have much higher quality. But these arguments could be weak from the client's perspective. How can I justify doing the project in Java? EDIT: I'm not asking if one technology is better than the other. This is not a "technology war" question.

  • passing back answers in prolog

    - by AhmadAssaf
    I have this code that runs perfectly: it returns true, and when tracing, the values are OK. But it's not returning back the answer; it acts strangely when it ends and always returns an empty list / uninstantiated variable.

        test :-
            extend(4,12,[4,3,1,2],[[1,5],[3,4],[6]],_ExtendedBins).

        %printing basic information about the
        extend(NumBins,Capacity,RemainingNumbers,BinsSoFar,_ExtendedBins) :-
            getNumberofBins(BinsSoFar,NumberOfBins),
            msort(RemainingNumbers,SortedRemaining), nl,
            format("Current Number of Bins is :~w\n",[NumberOfBins]),
            format("Allowed Capacity is :~w\n",[Capacity]),
            format("maximum limit in bin is :~w\n",[NumBins]),
            format("Trying to fit :~w\n\n",[SortedRemaining]),
            format("Possible Solutions :\n\n"),
            fitElements(NumBins,NumberOfBins,Capacity,SortedRemaining,BinsSoFar,[]).

        %this is were the creation for possibilities will start
        %will check first if the number of bins allowed is less than then
        %we create a new list with all the possible combinations
        %after that we start matching to other bins with capacity constraint
        fitElements(NumBins,NumberOfBins,Capacity,RemainingNumbers,Bins,ExtendedBins) :-
            (   NumberOfBins < NumBins
            ->  print('Creating new set: ')
            ;   print('Sorry, Cannot create New Sets')
            ),
            createNewList(Capacity,RemainingNumbers,Bins,ExtendedBins).

        createNewList(Capacity,RemainingNumbers,Bins,ExtendedBins) :-
            createNewList(Capacity,RemainingNumbers,Bins,[],ExtendedBins),
            print(ExtendedBins).

        createNewList(0,Bins,Bins,ExtendedBins,ExtendedBins).
        createNewList(_,[],_,ExtendedBins,ExtendedBins).
        createNewList(Capacity,[Element|Rest],Bins,Temp,ExtendedBins) :-
            conjunct_to_list(Element,ListedElement),
            append(ListedElement,Temp,NewList),
            sumlist(NewList,Sum),
            (   Sum =< Capacity,
                append(ListedElement,ExtendedBins,Result)
            ;   Capacity = 0
            ),
            createNewList(Capacity,Rest,Bins,NewList,Result).

        fit(0,[],ExtendedBins,ExtendedBins).
        fit(Capacity,[Element|Rest],Bin,ExtendedBins) :-
            conjunct_to_list(Element,Listed),
            append(Listed,Bin,NewBin),
            sumlist(NewBin,Sum),
            (   Sum =< Capacity
            ->  fit(Capacity,Rest,NewBin,ExtendedBins)
            ;   Capacity = 0,
                append(NewBin,ExtendedBins,NewExtendedBins),
                print(NewExtendedBins),
                fit(0,[],NewBin,ExtendedBins)
            ).

        %get the number of bins provided
        getNumberofBins(List,NumberOfBins) :-
            getNumberofBins(List,0,NumberOfBins).
        getNumberofBins([],NumberOfBins,NumberOfBins).
        getNumberofBins([_List|Rest],TempCount,NumberOfBins) :-
            NewCount is TempCount + 1,                      %calculate the count
            getNumberofBins(Rest,NewCount,NumberOfBins).    %recursive call

        %Convert set of terms into a list - used when needed to append
        conjunct_to_list((A,B), L) :- !,
            conjunct_to_list(A, L0),
            conjunct_to_list(B, L1),
            append(L0, L1, L).
        conjunct_to_list(A, [A]).

    Greatly appreciate the help.

  • Planning to create PDF files in Ruby on Rails

    - by deau
    Hi there. A Ruby on Rails app will have access to a number of images and fonts. The images are components of a visual layout which will be stored separately as a set of rules. The rules specify document dimensions along with which images are used and where. The app needs to take these rules, fetch the images, and generate a PDF that is ready for local printing or emailing. The fonts will also be important: the user needs to customize the layout by inputting text which will be included in the PDF. The PDF must therefore also contain the desired font, so that the document renders identically across different machines. Each PDF may have many pages. Each page may have different dimensions, but this is not essential. Either way, the ability to manipulate the dimensions and margins given by the PDF is essential. The only thing that needs to be regularly changed is the text. If this takes too much development, then the app can store the layouts in 3rd-party PDFs and edit the textual content directly. Eventually, though, this will prove too restrictive on the app's intended functionality, so I would prefer the app to generate the PDFs itself. I have never worked with PDFs before and, for the most part, I've never had to output anything to the user outside their monitor. A printed medium could require a very different approach to get the best results. If anyone has any advice on how to model the PDF output, it would be really appreciated. The technical aspects of printing such as bleed, resolution and colour have already been factored into the layouts and images. I am aware that PDF is a proprietary file format and I want to use free or open source software. I have seen a number of Ruby libraries for generating PDF files, but because I am new on this scene I have no way to reliably compare them and too little time to implement and test them all. I also have the option of using C to handle this feature, and if this is process-intensive then that might be preferred. What should I be thinking about, and how should I be planning to implement this?

  • Did you miss the Oracle EPM Live Webcast on Project Planning? Now you can watch it again

    - by antonella.buonagurio
    If you could not follow the latest EPM webcast dedicated to Project Planning, you can watch it again at this link. The webcast, part of a series of live web seminars dedicated to professionals in the finance, administration and controlling area, focuses on the budgeting, forecasting and management-control process for job orders and project activities from an economic-financial perspective. During the web seminar, the functional integration of Oracle Hyperion EPM System with the other Oracle solutions for project planning & management is presented. Don't miss the last session before the holidays! July 13, Oracle EPM Live webcast: Predictive Planning. Click here to find out more!

  • Webinar: Riding the Fence or Planning the Upgrade to 11gR2?

    - by Greg Jensen
    Is your organization riding the Identity and Access fence, unable to decide if you are ready to upgrade? Are you unsure what the technical and business value gains are in upgrading to Oracle's 11gR2? Or are you planning for the upgrade and just unsure of what to expect? In this webinar, experts from Oracle and AmerIndia will discuss the new features of 11gR2, the latest market trends, and how IAM transforms organizations. In addition, the planning and implementation strategy of the upgrade process will be discussed. The presenters will also share success stories and highlight challenges faced by organizations belonging to different verticals, and how Oracle's solutions and AmerIndia's services addressed those challenges. Topics include:

    - Market trends and 11gR2
    - Planning an upgrade
    - Approach and Implementation Strategy
    - Success stories

    Registration is now open for this webinar, December 5th from 2pm - 3pm EST.

  • Is it worth planning before jumping into the code?

    - by Rushino
    I always thought that planning is important for a game, but I don't know to what extent. Some are telling me to code instead of planning, but I feel like it's still important, because once you're in the code you will know what to do next more easily. I am currently working on a game that will have lots of content, so I decided to start a design document introducing that content, and on the side I am doing proofs of concept to check if it can be done. Parts of each proof of concept could then be used later in the real game. EDIT: I am working alone on this project. So my question is: is it worth planning before jumping into the code? I'm still interested to know what others have to say about this, because I still get some people saying I should code instead of thinking. So what's your opinion on this?

  • What disk setup is needed / best practice for hypervisor-only servers?

    - by Luke404
    Planning to buy some servers to run a hypervisor (Citrix XenServer or VMware vSphere; we still have to decide between the two), we'd like to boot off the local redundant SD card module offered by various vendors (e.g. Dell, HP, etc.). The actual VMs will run from an existing iSCSI SAN (which, by the way, can't support booting the servers directly off the SAN). What are the reasons, if any, to choose completely diskless servers vs. having some local storage? And what would be the guidelines for choosing that local storage? (number of spindles, RAID level, etc.)

  • 2D trajectory planning of a spaceship with physics

    - by egarcia
    I'm implementing a 2D game with ships in space. In order to do it, I'm using LÖVE, which wraps Box2D with Lua. But I believe that my question can be answered by anyone with a greater understanding of physics than myself, so pseudocode is accepted as a response. My problem is that I don't know how to move my spaceships properly on a 2D physics-enabled world. More concretely:

    A ship of mass m is located at an initial position {x, y}. It has an initial velocity vector of {vx, vy} (which can be {0,0}). The objective is a point at {xo,yo}. The ship has to reach the objective with a velocity of {vxo, vyo} (or near it), following the shortest trajectory. There's a function called update(dt) that is called frequently (i.e. 30 times per second). In this function, the ship can modify its position and trajectory by applying "impulses" to itself. The magnitude of the impulses is binary: you can either apply it in a given direction or not apply it at all. In code, it looks like this:

        def Ship:update(dt)
          m = self:getMass()
          x,y = self:getPosition()
          vx,vy = self.getLinearVelocity()
          xo,yo = self:getTargetPosition()
          vxo,vyo = self:getTargetVelocity()
          thrust = self:getThrust()
          if(???)
            angle = ???
            self:applyImpulse(math.sin(angle)*thrust, math.cos(angle)*thrust)
          end
        end

    The first ??? is there to indicate that on some occasions (I guess) it would be better not to impulse and to let the ship drift. The second ??? part consists of how to calculate the impulse angle on a given dt. We are in space, so we can ignore things like air friction. Although it would be very nice, I'm not looking for someone to code this for me; I put the code there so my problem is clearly understood. What I need is a strategy - a way of attacking this. I know some basic physics, but I'm no expert. For example, does this problem have a name? That sort of thing. Thanks a lot.
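
    For what it's worth, one common approach (a hedged sketch, not a full solution) is proportional control: each update, compute the velocity you would need to arrive at the target with the desired final velocity, and thrust along the difference between that and your current velocity, coasting when the error is small. A rough Python sketch under those assumptions (all constants are made-up tuning values; convert the returned angle to your engine's convention, since the question's code uses sin for the x component):

        import math

        CRUISE_SPEED = 10.0     # hypothetical tuning constants
        COAST_THRESHOLD = 0.5   # below this speed error, drift instead of thrusting

        def steer(x, y, vx, vy, xo, yo, vxo, vyo):
            """Return the impulse angle in radians, or None to coast this frame."""
            dx, dy = xo - x, yo - y
            dist = math.hypot(dx, dy)
            if dist > 1e-6:
                # Blend from "cruise toward the target" far away, to "match the
                # required arrival velocity" as the distance shrinks.
                w = 1.0 / (1.0 + dist)
                want_vx = (1 - w) * CRUISE_SPEED * dx / dist + w * vxo
                want_vy = (1 - w) * CRUISE_SPEED * dy / dist + w * vyo
            else:
                want_vx, want_vy = vxo, vyo
            dvx, dvy = want_vx - vx, want_vy - vy
            if math.hypot(dvx, dvy) < COAST_THRESHOLD:
                return None                 # error is small: drift (the first ???)
            return math.atan2(dvy, dvx)     # impulse direction (the second ???)

    As a pointer for further reading, this family of problems is usually discussed under "steering behaviors" (seek/arrive) in game AI literature.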

  • Random Page Cost and Planning

    - by Dave Jarvis
    A query (see below) extracts climate data from weather stations within a given radius of a city, using the dates for which those weather stations actually have data. The query uses the table's only index, rather effectively:

        CREATE UNIQUE INDEX measurement_001_stc_idx
          ON climate.measurement_001
          USING btree (station_id, taken, category_id);

    Reducing the server's configuration value for random_page_cost from 2.0 to 1.1 gave a massive performance improvement for the given range (nearly an order of magnitude) because it suggested to PostgreSQL that it should use the index. While the results now return in 5 seconds (down from ~85 seconds), problematic lines remain. Bumping the query's end date by a single year causes a full table scan:

        sc.taken_start >= '1900-01-01'::date AND
        sc.taken_end <= '1997-12-31'::date AND

    How do I persuade PostgreSQL to use the indexes regardless of the number of years between the two dates? (A full table scan against 43 million rows is probably not the best plan.) Find the EXPLAIN ANALYSE results below the query. Thank you!

    Query:

        SELECT
          extract(YEAR FROM m.taken) AS year,
          avg(m.amount) AS amount
        FROM
          climate.city c,
          climate.station s,
          climate.station_category sc,
          climate.measurement m
        WHERE
          c.id = 5182 AND
          earth_distance(
            ll_to_earth(c.latitude_decimal,c.longitude_decimal),
            ll_to_earth(s.latitude_decimal,s.longitude_decimal)) / 1000 <= 30 AND
          s.elevation BETWEEN 0 AND 3000 AND
          s.applicable = TRUE AND
          sc.station_id = s.id AND
          sc.category_id = 1 AND
          sc.taken_start >= '1900-01-01'::date AND
          sc.taken_end <= '1996-12-31'::date AND
          m.station_id = s.id AND
          m.taken BETWEEN sc.taken_start AND sc.taken_end AND
          m.category_id = sc.category_id
        GROUP BY
          extract(YEAR FROM m.taken)
        ORDER BY
          extract(YEAR FROM m.taken)

    1900 to 1996: Index

        Sort (cost=1348597.71..1348598.21 rows=200 width=12) (actual time=2268.929..2268.935 rows=92 loops=1)
          Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))
          Sort Method: quicksort Memory: 32kB
          ->  HashAggregate (cost=1348586.56..1348590.06 rows=200 width=12) (actual time=2268.829..2268.886 rows=92 loops=1)
                ->  Nested Loop (cost=0.00..1344864.01 rows=744510 width=12) (actual time=0.807..2084.206 rows=134893 loops=1)
                      Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (sc.station_id = m.station_id))
                      ->  Nested Loop (cost=0.00..12755.07 rows=1220 width=18) (actual time=0.502..521.937 rows=23 loops=1)
                            Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)
                            ->  Index Scan using city_pkey1 on city c (cost=0.00..2.47 rows=1 width=16) (actual time=0.014..0.015 rows=1 loops=1)
                                  Index Cond: (id = 5182)
                            ->  Nested Loop (cost=0.00..9907.73 rows=3659 width=34) (actual time=0.014..28.937 rows=3458 loops=1)
                                  ->  Seq Scan on station_category sc (cost=0.00..970.20 rows=3659 width=14) (actual time=0.008..10.947 rows=3458 loops=1)
                                        Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1996-12-31'::date) AND (category_id = 1))
                                  ->  Index Scan using station_pkey1 on station s (cost=0.00..2.43 rows=1 width=20) (actual time=0.004..0.004 rows=1 loops=3458)
                                        Index Cond: (s.id = sc.station_id)
                                        Filter: (s.applicable AND (s.elevation >= 0) AND (s.elevation <= 3000))
                      ->  Append (cost=0.00..1072.27 rows=947 width=18) (actual time=6.996..63.199 rows=5865 loops=23)
                            ->  Seq Scan on measurement m (cost=0.00..25.00 rows=6 width=22) (actual time=0.000..0.000 rows=0 loops=23)
                                  Filter: (m.category_id = 1)
                            ->  Bitmap Heap Scan on measurement_001 m (cost=20.79..1047.27 rows=941 width=18) (actual time=6.995..62.390 rows=5865 loops=23)
                                  Recheck Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))
                                  ->  Bitmap Index Scan on measurement_001_stc_idx (cost=0.00..20.55 rows=941 width=0) (actual time=5.775..5.775 rows=5865 loops=23)
                                        Index Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))
        Total runtime: 2269.264 ms

    1900 to 1997: Full Table Scan

        Sort (cost=1370192.26..1370192.76 rows=200 width=12) (actual time=86165.797..86165.809 rows=94 loops=1)
          Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))
          Sort Method: quicksort Memory: 32kB
          ->  HashAggregate (cost=1370181.12..1370184.62 rows=200 width=12) (actual time=86165.654..86165.736 rows=94 loops=1)
                ->  Hash Join (cost=4293.60..1366355.81 rows=765061 width=12) (actual time=534.786..85920.007 rows=139721 loops=1)
                      Hash Cond: (m.station_id = sc.station_id)
                      Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end))
                      ->  Append (cost=0.00..867005.80 rows=43670150 width=18) (actual time=0.009..79202.329 rows=43670079 loops=1)
                            ->  Seq Scan on measurement m (cost=0.00..25.00 rows=6 width=22) (actual time=0.001..0.001 rows=0 loops=1)
                                  Filter: (category_id = 1)
                            ->  Seq Scan on measurement_001 m (cost=0.00..866980.80 rows=43670144 width=18) (actual time=0.008..73312.008 rows=43670079 loops=1)
                                  Filter: (category_id = 1)
                      ->  Hash (cost=4277.93..4277.93 rows=1253 width=18) (actual time=534.704..534.704 rows=25 loops=1)
                            ->  Nested Loop (cost=847.87..4277.93 rows=1253 width=18) (actual time=415.837..534.682 rows=25 loops=1)
                                  Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)
                                  ->  Index Scan using city_pkey1 on city c (cost=0.00..2.47 rows=1 width=16) (actual time=0.012..0.014 rows=1 loops=1)
                                        Index Cond: (id = 5182)
                                  ->  Hash Join (cost=847.87..1352.07 rows=3760 width=34) (actual time=6.427..35.107 rows=3552 loops=1)
                                        Hash Cond: (s.id = sc.station_id)
                                        ->  Seq Scan on station s (cost=0.00..367.25 rows=7948 width=20) (actual time=0.004..23.529 rows=7949 loops=1)
                                              Filter: (applicable AND (elevation >= 0) AND (elevation <= 3000))
                                        ->  Hash (cost=800.87..800.87 rows=3760 width=14) (actual time=6.416..6.416 rows=3552 loops=1)
                                              ->  Bitmap Heap Scan on station_category sc (cost=430.29..800.87 rows=3760 width=14) (actual time=2.316..5.353 rows=3552 loops=1)
                                                    Recheck Cond: (category_id = 1)
                                                    Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1997-12-31'::date))
                                                    ->  Bitmap Index Scan on station_category_station_category_idx (cost=0.00..429.35 rows=6376 width=0) (actual time=2.268..2.268 rows=6339 loops=1)
                                                          Index Cond: (category_id = 1)
        Total runtime: 86165.936 ms

  • Git repo planning questions

    - by masonk
    At work, development uses Perforce to handle code sharing. I won't say "revision control", because we aren't allowed to check in changes until they are ready for regression testing. In order to get my personal change sets under revision control, I've been given the go-ahead to build my own git and initialize the client view of the Perforce depot as a git repo. There are some difficulties in doing this, however.

    The client view lives in a subfolder of ~ (~/p4), and I want to put ~ under revision control as well, with its own separate history. I can't figure out how to keep the history for ~ separate from ~/p4 without using a submodule. The problem with a submodule is that it looks like I have to make a repository that will become the submodule and then git submodule add <repo> <path>. But there is nowhere to make the submodule's repository except in ~. There seems to be no safe place to create the initial client view of the depot with git p4 clone. (I'm working off of the assumption that initing or cloning a repo into a subdirectory of a git repo is not supported. At least, I can find nothing authoritative on nested git repos.) edit: Is merely ignoring ~/p4 in the repo rooted at ~ enough to allow me to init a nested repo in ~/p4? My __git_ps1 function still thinks I'm in a git repository when I visit an ignored subdirectory of a git repo, so I'm inclined to think not.

    I need the "remote" repository created by git p4 sync to be a branch in ~/p4. We are required to keep all of our code in ~/p4 so that it doesn't get backed up. Can I pull from a "remote" branch that is really a local branch?

    This one is just for convenience, but I thought I could learn something by asking it. For 99% of the project, I just want to start with the p4 head revision as the initial commit object. For the other 1%, I would like to suck down the entire p4 history so that I can browse it in git. IOW, after I'm done initializing it, the initial commit of the remotes/p4/master branch will contain revision 1 of //depot/prod/Foo/Bar/* and revision X of the other files in //depot/prod/*, where X is the head revision; and the remotes/p4/master branch contains Y commits, where Y is the number of changelists that had a file in //depot/prod/Foo/Bar/*, with each commit in the history corresponding to one of those p4 changelists, and HEAD looking like p4's head.

  • rails semi-complex STI with ancestry data model planning the routes and controllers

    - by ere
    I'm trying to figure out the best way to manage my controller(s) and models for a particular use case. I'm building a review system where a User may build a review of several distinct types with a polymorphic Reviewable:

    - Country (has_many reviews & cities)
    - Subdivision/State (optional; sometimes it doesn't exist; also reviewable; has_many cities)
    - City (has places & reviews)
    - Borough (optional; also reviewable; ex: Brooklyn)
    - Neighborhood (optional & reviewable; ex: Williamsburg)
    - Place (belongs to a city)

    I'm also wondering about adding more complexity. I also want to include subdivisions occasionally: e.g. for the US I might add Texas, or for Germany, Bavaria, and have it be reviewable as well. But not every country has regions, and even those that do might never be reviewed. So it's not at all strict. I would like it to be as simple and flexible as possible. It'd kinda be nice if the user could just land on one form and select either a city or a country, and then drill down using data from, say, Foursquare to find a particular place in a city and make a review.

    I'm really not sure which route I should take. For example, what happens if I have a Country and a City... and then I decide to add a Borough? Could I give places tags (e.g. Williamsburg, Brooklyn) that belong_to New York City, with the tags belonging to the city? Tags are more flexible, optionally explain what areas a place might be in, and could have places and be reviewable themselves. So I'm looking for suggestions from anyone who's done something related. Using Rails 3.2 and Mongoid.

  • Java EE Website Planning Questions

    - by Tom Tresansky
    I'm a .NET programmer who is soon moving to the Java EE world. I have plenty of experience with .NET web technologies: web services, WebForms and MVC. I am also very familiar with the Java language, and have written a few servlets and modified a couple of JSP pages, but I haven't touched EE yet. I'd like to set up a public website using Java EE so I can familiarize myself with what's current. I'm thinking just a technology playground at this point, with no particular purpose in mind. What Java technologies are the current hotness for this sort of thing? (For example, if someone asked me what I'd recommend learning to set up a new .NET site, I'd say use ASP.NET MVC instead of WebForms, and recommend LINQ-to-SQL as a quick, simple and widely used ORM.) So, what I'd like to know is:

    - Is there a recommended technology for the presentation layer? Is JSP considered a good approach, or is there anything cleaner/newer/more widespread?
    - Is Hibernate still widely used for persistence? Is it obsolete? Is there anything better out there? (I've worked with NHibernate some, so I wouldn't be starting from scratch.)
    - Is cheap Java EE web hosting available?
    - What should I know as a .NET web developer moving to the Java world?

  • Planning a competition

    - by Jérôme
    I need to produce the schedule for a sporting event. There are 30 teams. Each team has to play 8 matches. This means that it is not possible for each team to compete against all the other teams, but I need to avoid two teams competing against each other more than once. My idea was to generate all possible matches (for 30 teams: (30*29)/2 = 435 matches) and select 120 matches from this list (8 matches for each team: 8 * 30 / 2 = 120 matches). This is where I'm having a hard time: how can I select these 120 matches? I tried some simple solutions (take the first match of the list, then the last, and so on) but they don't seem to work with 30 teams. I also tried to generate every possible match combination and find one that works, but with 30 teams this takes too much computation time. Is there an existing algorithm that I could implement?
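
    For what it's worth, one simple constructive scheme (a sketch, not the only possible answer): arrange the 30 teams in a circle and have each team play the 4 teams that follow it. Every team then appears in 4 matches as the "first" team and 4 as the "second", for exactly 8 matches each, and since 4 is less than half of 30 no pairing repeats. A minimal Python sketch of that idea:

        # Circle schedule: team i plays the next `offsets` teams around a circle.
        # With 30 teams and 8 matches each, offsets = 4 and we get 120 matches.
        from collections import Counter

        def schedule(n_teams=30, matches_per_team=8):
            offsets = matches_per_team // 2
            assert offsets < n_teams / 2, "offsets must stay below n/2 to avoid repeats"
            return [(i, (i + k) % n_teams)
                    for i in range(n_teams)
                    for k in range(1, offsets + 1)]

        matches = schedule()
        print(len(matches))                                   # 120
        counts = Counter(t for match in matches for t in match)
        assert set(counts.values()) == {8}                    # every team plays 8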

  • Planning a skillset for a fallback career [closed]

    - by Davy Kavanagh
    I'm not too certain this is an SO question, but I didn't think it belonged on meta either. Long story short, I am a bioinformatics researcher. I like to code; it's my favourite part of the job. I have been thinking for a while that if academia is not kind to me, I might seek a career in software development. My current contract is for three years, and I would like to spend some time over the next three years learning and practicing as much software development as possible. Python seems like a popular language and is what I mostly use to get things done, but I am also a heavy user of R. So my main question is: are Python and R good things to be learning with a software dev goal in mind, and if so, is there any particular type of programming or software that might be useful to have experience with? Hard questions to answer, I know, but I thought I would get the answer from people who are in the know. Cheers, Davy.

  • Simple network gaming, client-server architecture planning.

    - by michal
    Hi, I'm coding a simple game which I plan to make multiplayer (over the network) as my university project. I'm considering two scenarios for client-server communication:

    1. The physics (they're trivial! I should call them "collision tests", in fact :) ) are processed on the server machine only. The communication therefore looks like:

        Client1 -> Server: Pressed "UP"
        Server -> Clients: here you go, Client1 position is now [X,Y]
        Client2 -> Server: Pressed "fire"
        Server -> Clients: Client1 hit Client2, make Client2 disappear!

    2. The server receives each event and broadcasts it to all the other clients:

        Client1 -> Server: Pressed "UP"
        Server -> Clients: Client1 pressed "UP", recalculate his position!! [Client1 receives this one as well!]

    Which one is better? Or maybe neither of them? :)
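
    For what it's worth, the difference is easy to see side by side. A toy sketch (Python, purely illustrative; the function names, message tuples and STEP constant are all made up) of the server-side handler for each scenario:

        # Scenario 1: authoritative server - it owns the physics and broadcasts
        # the resulting state, so clients stay thin and cannot cheat the physics.
        STEP = 5  # hypothetical movement per "UP" press

        def handle_input_authoritative(world, client_id, key, broadcast):
            if key == "UP":
                x, y = world[client_id]
                world[client_id] = (x, y - STEP)
                broadcast(("position", client_id, world[client_id]))

        # Scenario 2: relay server - it only forwards inputs, so every client
        # must run an identical, deterministic simulation of its own.
        def handle_input_relay(client_id, key, broadcast):
            broadcast(("input", client_id, key))

    The trade-off: scenario 1 costs more server CPU and bandwidth per state change, while scenario 2 risks clients drifting out of sync unless the simulation is fully deterministic.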

  • Initial capacity of collection types, i.e. Dictionary, List

    - by Neil N
    Certain collection types in .NET have an optional "initial capacity" constructor parameter, i.e.:

        Dictionary<string, string> something = new Dictionary<string,string>(20);
        List<string> anything = new List<string>(50);

    I can't seem to find what the default initial capacity is for these objects on MSDN. If I know I will only be storing 12 or so items in a dictionary, doesn't it make sense to set the initial capacity to something like 20? My reasoning is: assuming the capacity grows like it does for a StringBuilder, which doubles each time the capacity is hit, and each re-allocation is costly, why not pre-set the size to something you know will hold your data, with some extra room just in case? If the initial capacity is 100, and I know I will only need a dozen or so, it seems as though the rest of that allocated RAM is allocated for nothing. Please spare me the "premature optimization" spiel for the O(n^n)th time. I know it won't make my apps any faster or save any meaningful amount of memory; this is mostly out of curiosity.
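
    The cost model the question assumes (capacity doubles each time it's hit) is easy to quantify. A quick sketch (Python for illustration only; this models the assumption above, not .NET's actual internals):

        # Count re-allocations under a hypothetical double-on-full growth policy.
        def reallocations(items, initial_capacity):
            capacity, reallocs = max(initial_capacity, 1), 0
            for n in range(1, items + 1):
                if n > capacity:
                    capacity *= 2       # grow by doubling, copying existing items
                    reallocs += 1
            return reallocs

        print(reallocations(12, 4))     # -> 2 (grew 4 -> 8 -> 16)
        print(reallocations(12, 20))    # -> 0 (pre-sized: no copies at all)

    Under that model, pre-sizing to a capacity you know you'll reach trades a one-time, slightly larger allocation for zero grow-and-copy passes.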
