Search Results

Search found 5122 results on 205 pages for 'max shawabkeh'.


  • Set primary key with two integers

    - by user299196
    I have a table with primary key (ColumnA, ColumnB). I want to make a function or procedure that, when passed two integers, will insert a row into the table but make sure the larger integer always goes into ColumnA and the smaller one into ColumnB. So SetKeysWithTheseNumbers(17, 19) would produce

        |---------|---------|
        | ColumnA | ColumnB |
        |---------|---------|
        | 19      | 17      |
        |---------|---------|

    and SetKeysWithTheseNumbers(19, 17) would produce the same thing.
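    A minimal sketch of such a procedure (MySQL syntax assumed; MyTable is a hypothetical table name): GREATEST and LEAST order the pair, so the insert never depends on argument order.

        -- Sketch only: MySQL syntax assumed; MyTable is a hypothetical table name.
        DELIMITER //
        CREATE PROCEDURE SetKeysWithTheseNumbers(IN a INT, IN b INT)
        BEGIN
            -- GREATEST/LEAST guarantee the larger value lands in ColumnA
            INSERT INTO MyTable (ColumnA, ColumnB)
            VALUES (GREATEST(a, b), LEAST(a, b));
        END //
        DELIMITER ;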

    Read the article

  • MySQL query puzzle - finding what WOULD have been the most recent date

    - by Hank
    I've looked all over and haven't yet found an intelligent way to handle this, though I feel sure one is possible. One table of historical data has quarterly information:

        CREATE TABLE Quarterly (
            unique_ID INT UNSIGNED NOT NULL,
            date_posted DATE NOT NULL,
            datasource TINYINT UNSIGNED NOT NULL,
            data FLOAT NOT NULL,
            PRIMARY KEY (unique_ID));

    Another table of historical data (which is very large) contains daily information:

        CREATE TABLE Daily (
            unique_ID INT UNSIGNED NOT NULL,
            date_posted DATE NOT NULL,
            datasource TINYINT UNSIGNED NOT NULL,
            data FLOAT NOT NULL,
            qtr_ID INT UNSIGNED,
            PRIMARY KEY (unique_ID));

    The qtr_ID field is not part of the feed of daily data that populated the database - instead, I need to retroactively populate the qtr_ID field in the Daily table with the Quarterly.unique_ID row ID, using what would have been the most recent quarterly data on that Daily.date_posted for that data source. For example, if the quarterly data is

        101  2009-03-31  1  4.5
        102  2009-06-30  1  4.4
        103  2009-03-31  2  7.6
        104  2009-06-30  2  7.7
        105  2009-09-30  1  4.7

    and the daily data is

        1001  2009-07-14  1  3.5  ??
        1002  2009-07-15  1  3.4  &&
        1003  2009-07-14  2  2.3  ^^

    then we would want the ?? qtr_ID field to be assigned '102' as the most recent quarter for that data source on that date, && would also be '102', and ^^ would be '104'. The challenges include that both tables (particularly the daily table) are actually very large, they can't be normalized to get rid of the repetitive dates or otherwise optimized, and for certain daily entries there is no preceding quarterly entry. I have tried a variety of joins, using datediff (where the challenge is finding the minimum value of datediff greater than zero), and other attempts, but nothing is working for me - usually my syntax is breaking somewhere. Any ideas welcome - I'll execute any basic ideas or concepts and report back.
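    A hedged sketch of one way to express "most recent quarterly row on or before the daily date" as a correlated subquery, against the tables above (MySQL syntax assumed; daily rows with no preceding quarterly row are simply left NULL):

        -- Sketch only: assumes the Quarterly/Daily definitions from the question.
        UPDATE Daily AS d
        SET d.qtr_ID = (
            SELECT q.unique_ID
            FROM Quarterly AS q
            WHERE q.datasource = d.datasource
              AND q.date_posted <= d.date_posted   -- "what would have been known then"
            ORDER BY q.date_posted DESC            -- most recent first
            LIMIT 1                                -- NULL when no preceding quarter exists
        );

    Given the stated table sizes, a composite index on Quarterly (datasource, date_posted) would likely be needed to keep the subquery cheap.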

    Read the article

  • Second largest number in list python

    - by Manu Lakaster
    So I have to find THE SECOND LARGEST NUMBER IN A LIST. I am doing it through simple loops. My approach is to divide the list into two parts, find the largest number in each part, and then compare the two numbers, choosing the smaller of the two. I cannot use ready-made functions or different approaches. Basically, this is my code, but it does not run correctly. Please help me fix it, because I have spent a lot of time on it :( Thanks. P.S. Can we use indices to "divide" a list?

        #!/usr/local/bin/python2.7
        alist = [-45, 0, 3, 10, 90, 5, -2, 4, 18, 45, 100, 1, -266, 706]
        largest = alist[0]
        h = len(alist) / 2
        m = len(alist) - h
        print(alist)
        for i in alist:
            if alist[h] > largest:
                largest = alist[h]
            i = i + 1
        print(largest)
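    For comparison, a hedged sketch of the usual single-pass variant (plain loop, no ready-made functions): track the two largest values seen so far. Note that the half-and-half idea can miss the answer when the two biggest values sit in the same half.

        # Sketch only: single pass, tracking the top two values.
        alist = [-45, 0, 3, 10, 90, 5, -2, 4, 18, 45, 100, 1, -266, 706]

        largest = second = float('-inf')
        for x in alist:
            if x > largest:
                second = largest      # old largest becomes the runner-up
                largest = x
            elif x > second:
                second = x

        print(second)  # 100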

    Read the article

  • Adwords: Is there a drawback to setting a really high max CPC to learn what works faster?

    - by Rob Sobers
    I'm toying with setting my max CPC really high on all my keywords to ensure my ad gets shown in the top spot on page one in order to draw more clicks. I think this will be a good way to quickly figure out whether the ads I'm writing have a decent CTR and, more importantly, whether the landing pages I'm building are converting. Since I can set a max daily budget for my campaign, I won't risk breaking the bank. I can't think of any drawbacks, personally. Am I missing any?

    Read the article

  • Exporting an animated FBX to XNA? (in 3DS Max)

    - by Itamar Marom
    I'm now working on an XNA 3D game, and I want to add animated models to it. I came across this example. I see there is one FBX file and a few texture files in the content project, and that in the code you can choose which "take" to play. In this code it is "Take_001". Please tell me: when I create and animate my own 3D model in 3DS Max (2012, since I was told it's only possible in this version), how can I define those takes? Also, do any configurations need to be made when exporting an FBX from 3DS Max to XNA? Thank you.
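    For reference, a hedged sketch of how the take name typically surfaces in code, assuming the helper classes from Microsoft's Skinned Model sample (which examples like the linked one generally build on; class and asset names may differ in your project):

        // Sketch only: assumes the Skinned Model sample's SkinningData/AnimationPlayer.
        Model model = Content.Load<Model>("dude");                     // hypothetical asset name
        SkinningData skinningData = (SkinningData)model.Tag;           // written by the sample's content processor
        AnimationClip clip = skinningData.AnimationClips["Take_001"];  // keyed by the FBX take name
        AnimationPlayer animationPlayer = new AnimationPlayer(skinningData);
        animationPlayer.StartClip(clip);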

    Read the article

  • Why doesn't max-height hold in Mobile Safari in landscape mode?

    - by Mick79
    I am building a small portfolio site for myself and have come across an odd quirk. I have an image inside a container, and to allow for multiple screen sizes, I am setting all dimensions in % rather than pixels. In iPhone portrait mode, everything is fine. However, in landscape mode, my image bursts out of its container, completely ignoring the max-height: 100% rule that works fine in portrait. Code:

        #centralident {
            position: relative;
            width: 50%;
            height: 50%;
            box-shadow: 0 0 10px black;
            margin-left: 25%;
            margin-top: 13%;
        }
        #centralident img {
            max-height: 100%;
        }
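    One thing worth checking (a hedged guess, not a confirmed diagnosis): percentage heights, including max-height, only resolve against an ancestor whose own height is definite, so pinning the ancestor chain is a common first experiment:

        /* Sketch only: give the ancestor chain a definite height so the
           percentage height/max-height on the container and image can resolve. */
        html, body {
            height: 100%;
        }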

    Read the article

  • Is it good practice to keep 2 related tables (using auto_increment PK) to have the same Max of auto_increment ID when table1 got modified?

    - by Tum
    This question is about good design practice in programming. Consider this example with 2 interrelated tables:

        Table1
        textID - text
        1      - love..
        2      - men...
        ...

        Table2
        rID - textID
        1   - 1
        2   - 2
        ...

    Note: in Table1, textID is an auto_increment primary key. In Table2, rID is an auto_increment primary key and textID is a foreign key. The relationship is that 1 rID will have 1 and only 1 textID, but 1 textID can have a few rIDs. So, when Table1 gets modified, Table2 should be updated accordingly. OK, here is a fictitious example. You build a very complicated system. When you modify 1 record in Table1, you need to keep track of the related record in Table2. To keep track, you can do this:

    Option 1: When you modify a record in Table1, you modify the related record in Table2. This could be quite hard in terms of programming, especially for a very, very complicated system.

    Option 2: Instead of modifying the related record in Table2, you delete the old record in Table2 and insert a new one. This is easier to program.

    For example, suppose you are using Option 2. When you modify records 1, 2, 3, ..., 100 in Table1, Table2 will look like this:

        Table2
        rID - textID
        101 - 1
        102 - 2
        ...
        200 - 100

    This means the max of the auto_increment IDs in Table1 is still the same (100), but the max of the auto_increment IDs in Table2 has already reached 200. What if the user modifies many times? If they do, might Table2 run out of IDs? We could use BIGINT, but would that make the app run slower?

    Note: if you spend time programming the modification of records in Table2 whenever Table1 gets modified, it will be very hard and thus error prone. But if you just clear the old records and insert new ones into Table2, it is much easier to program, and thus your program is simpler and less error prone.

    So, is it good practice to keep 2 related tables (using auto_increment PKs) with the same max auto_increment ID when Table1 gets modified?
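    For concreteness, a hedged sketch of the two options against the hypothetical tables above (generic SQL): Option 1 touches no new IDs, while Option 2 burns a fresh rID on every change, which is exactly what inflates the counter.

        -- Option 1 (update in place): Table2's auto_increment counter is untouched.
        UPDATE Table2 SET textID = 1 WHERE rID = 1;

        -- Option 2 (delete + insert): every change consumes a brand-new rID.
        DELETE FROM Table2 WHERE rID = 1;
        INSERT INTO Table2 (textID) VALUES (1);   -- allocated rID 101, then 102, ...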

    Read the article

  • Shadows shimmer when camera moves

    - by Chad Layton
    I've implemented shadow maps in my simple block engine as an exercise. I'm using one directional light and using the view volume to create the shadow matrices. I'm experiencing some problems with the shadows shimmering when the camera moves, and I'd like to know if it's an issue with my implementation or just an issue with basic/naive shadow mapping itself. Here's a video: http://www.youtube.com/watch?v=vyprATt5BBg&feature=youtu.be

    Here's the code I use to create the shadow matrices. The commented-out code is my original attempt to perfectly fit the view frustum. You can also see my attempt at clamping movement to texels in the shadow map, which didn't seem to make any difference. Then I tried using a bounding sphere instead, also to no apparent effect.

        public void CreateViewProjectionTransformsToFit(Camera camera, out Matrix viewTransform, out Matrix projectionTransform, out Vector3 position)
        {
            BoundingSphere cameraViewFrustumBoundingSphere = BoundingSphere.CreateFromFrustum(camera.ViewFrustum);
            float lightNearPlaneDistance = 1.0f;
            Vector3 lookAt = cameraViewFrustumBoundingSphere.Center;
            float distanceFromLookAt = cameraViewFrustumBoundingSphere.Radius + lightNearPlaneDistance;
            Vector3 directionFromLookAt = -Direction * distanceFromLookAt;
            position = lookAt + directionFromLookAt;
            viewTransform = Matrix.CreateLookAt(position, lookAt, Vector3.Up);
            float lightFarPlaneDistance = distanceFromLookAt + cameraViewFrustumBoundingSphere.Radius;
            float diameter = cameraViewFrustumBoundingSphere.Radius * 2.0f;
            Matrix.CreateOrthographic(diameter, diameter, lightNearPlaneDistance, lightFarPlaneDistance, out projectionTransform);

            //Vector3 cameraViewFrustumCentroid = camera.ViewFrustum.GetCentroid();
            //position = cameraViewFrustumCentroid - (Direction * (camera.FarPlaneDistance - camera.NearPlaneDistance));
            //viewTransform = Matrix.CreateLookAt(position, cameraViewFrustumCentroid, Up);

            //Vector3[] cameraViewFrustumCornersWS = camera.ViewFrustum.GetCorners();
            //Vector3[] cameraViewFrustumCornersLS = new Vector3[8];
            //Vector3.Transform(cameraViewFrustumCornersWS, ref viewTransform, cameraViewFrustumCornersLS);

            //Vector3 min = cameraViewFrustumCornersLS[0];
            //Vector3 max = cameraViewFrustumCornersLS[0];
            //for (int i = 1; i < 8; i++)
            //{
            //    min = Vector3.Min(min, cameraViewFrustumCornersLS[i]);
            //    max = Vector3.Max(max, cameraViewFrustumCornersLS[i]);
            //}

            //// Clamp to nearest texel
            //float texelSize = 1.0f / Renderer.ShadowMapSize;
            //min.X -= min.X % texelSize;
            //min.Y -= min.Y % texelSize;
            //min.Z -= min.Z % texelSize;
            //max.X -= max.X % texelSize;
            //max.Y -= max.Y % texelSize;
            //max.Z -= max.Z % texelSize;

            //// We just use an orthographic projection matrix. The sun is so far away that its rays are essentially parallel.
            //Matrix.CreateOrthographicOffCenter(min.X, max.X, min.Y, max.Y, -max.Z, -min.Z, out projectionTransform);
        }

    And here's the relevant part of the shader:

        if (CastShadows)
        {
            float4 positionLightCS = mul(float4(position, 1.0f), LightViewProj);
            float2 texCoord = clipSpaceToScreen(positionLightCS) + 0.5f / ShadowMapSize;
            float shadowMapDepth = tex2D(ShadowMapSampler, texCoord).r;
            float distanceToLight = length(LightPosition - position);
            float bias = 0.2f;
            if (shadowMapDepth < (distanceToLight - bias))
            {
                return float4(0.0f, 0.0f, 0.0f, 0.0f);
            }
        }

    The shimmer is slightly better if I drastically reduce the view volume, but I think that's mostly just because the texels become smaller and it's harder to notice them flickering back and forth. I'd appreciate any insight; I'd very much like to understand what's going on before I try other techniques.
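    In case it helps narrow things down, the standard stabilization trick differs from the commented-out attempt in two ways: the ortho volume keeps a constant size (the bounding-sphere diameter already does this), and the snapping is applied to the light camera's target in light space, in world-units-per-texel rather than UV-sized steps. A hedged sketch reusing the names above (not the author's code):

        // Sketch only: snap the look-at point to whole shadow-map texels in light
        // space so the rasterized depth samples stay fixed while the camera glides.
        float worldUnitsPerTexel = diameter / Renderer.ShadowMapSize;
        Matrix lightRotation = Matrix.CreateLookAt(Vector3.Zero, Direction, Vector3.Up);
        Vector3 lookAtLS = Vector3.Transform(lookAt, lightRotation);
        lookAtLS.X -= lookAtLS.X % worldUnitsPerTexel;
        lookAtLS.Y -= lookAtLS.Y % worldUnitsPerTexel;
        lookAt = Vector3.Transform(lookAtLS, Matrix.Invert(lightRotation));
        // ...then build viewTransform from the snapped lookAt as before.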

    Read the article

  • SQL -- How to combine three SELECT statements with very tricky requirements

    - by Frederick
    I have a SQL query with three SELECT statements. A picture of the data tables generated by these three select statements is located at www.britestudent.com/pub/1.png. Each of the three data tables has identical columns. I want to combine these three tables into one table such that:

    (1) All rows in the top table (Table1) are always included.
    (2) Rows in the middle table (Table2) are included only when the values in column1 (UserName) and column4 (CourseName) do not match any row from Table1. Both columns need to match for the row in Table2 to not be included.
    (3) Rows in the bottom table (Table3) are included only when the value in column4 (CourseName) is not already in any row of the results from combining Table1 and Table2.

    I have had success in implementing (1) and (2) with an SQL query like this:

        SELECT DISTINCT
            UserName AS UserName,
            MAX(AmountUsed) AS AmountUsed,
            MAX(AnsweredCorrectly) AS AnsweredCorrectly,
            CourseName,
            MAX(course_code) AS course_code,
            MAX(NoOfQuestionsInCourse) AS NoOfQuestionsInCourse,
            MAX(NoOfQuestionSetsInCourse) AS NoOfQuestionSetsInCourse
        FROM (
            "SELECT statement 1"
            UNION
            "SELECT statement 2"
        ) dt_derivedTable_1
        GROUP BY CourseName, UserName

    where "SELECT statement 1" is the query that generates Table1 and "SELECT statement 2" is the query that generates Table2. A picture of the data table generated by this query is located at www.britestudent.com/pub/2.png. I can get away with using the MAX() function because values in the AmountUsed and AnsweredCorrectly columns in Table1 will always be larger than those in Table2 (and they are identical in the last three columns of both tables).

    What I fail at is implementing (3). Any suggestions on how to do this will be appreciated. It is tricky because the UserName values in Table3 are null, and because the CourseName values in the combined Table1 and Table2 results are not unique (but they are unique in Table3). After implementing (3), the final table should look like the table in picture 2.png with the addition of the last row from Table3 (the row with the CourseName value starting with "4. Klasse...").

    I have tried to implement (3) using another derived table with SELECT, MAX() and UNION, but I could not get it to work. Below is my full SQL query, with the lines from this failed attempt commented out.

    Cheers, Frederick

    PS--I am new to this forum (and new to SQL as well), but I have had more of my previous problems answered by reading other people's posts on this forum than from reading any other forum or Web site. This forum is a great resource.

        -- SELECT DISTINCT MAX(UserName),
        --     MAX(AmountUsed) AS AmountUsed,
        --     MAX(AnsweredCorrectly) AS AnsweredCorrectly,
        --     CourseName,
        --     MAX(course_code) AS course_code,
        --     MAX(NoOfQuestionsInCourse) AS NoOfQuestionsInCourse,
        --     MAX(NoOfQuestionSetsInCourse) AS NoOfQuestionSetsInCourse
        -- FROM (
        SELECT DISTINCT
            UserName AS UserName,
            MAX(AmountUsed) AS AmountUsed,
            MAX(AnsweredCorrectly) AS AnsweredCorrectly,
            CourseName,
            MAX(course_code) AS course_code,
            MAX(NoOfQuestionsInCourse) AS NoOfQuestionsInCourse,
            MAX(NoOfQuestionSetsInCourse) AS NoOfQuestionSetsInCourse
        FROM (
            -- Table 1 - All UserAccount/Course combinations that have had quizzes.
            SELECT DISTINCT
                dbo.win_user.user_name AS UserName,
                cast(dbo.GetAmountUsed(dbo.session_header.win_user_id, dbo.course.course_id, dbo.course.no_of_questionsets_in_course) as nvarchar(10)) AS AmountUsed,
                Isnull(cast(dbo.GetAnswerCorrectly(dbo.session_header.win_user_id, dbo.course.course_id, dbo.question_set.no_of_questions) as nvarchar(10)), 0) AS AnsweredCorrectly,
                dbo.course.course_name AS CourseName,
                dbo.course.course_code,
                dbo.course.no_of_questions_in_course AS NoOfQuestionsInCourse,
                dbo.course.no_of_questionsets_in_course AS NoOfQuestionSetsInCourse
            FROM dbo.session_detail
                INNER JOIN dbo.session_header ON dbo.session_detail.session_header_id = dbo.session_header.session_header_id
                INNER JOIN dbo.win_user ON dbo.session_header.win_user_id = dbo.win_user.win_user_id
                INNER JOIN dbo.win_user_course ON dbo.win_user_course.win_user_id = dbo.win_user.win_user_id
                INNER JOIN dbo.question_set ON dbo.session_header.question_set_id = dbo.question_set.question_set_id
                RIGHT OUTER JOIN dbo.course ON dbo.win_user_course.course_id = dbo.course.course_id
            WHERE (dbo.session_detail.no_of_attempts = 1 OR dbo.session_detail.no_of_attempts IS NULL)
                AND (dbo.session_detail.is_correct = 1 OR dbo.session_detail.is_correct IS NULL)
                AND (dbo.win_user_course.is_active = 'True')
            GROUP BY dbo.win_user.user_name, dbo.course.course_name, dbo.question_set.no_of_questions,
                dbo.course.no_of_questions_in_course, dbo.course.no_of_questionsets_in_course,
                dbo.session_header.win_user_id, dbo.course.course_id, dbo.course.course_code

            UNION ALL

            -- Table 2 - All UserAccount/Course combinations that do or do not have quizzes
            -- but where the Course is selected for quizzes for that User Account.
            SELECT
                dbo.win_user.user_name AS UserName,
                -1 AS AmountUsed,
                -1 AS AnsweredCorrectly,
                dbo.course.course_name AS CourseName,
                dbo.course.course_code,
                dbo.course.no_of_questions_in_course AS NoOfQuestionsInCourse,
                dbo.course.no_of_questionsets_in_course AS NoOfQuestionSetsInCourse
            FROM dbo.win_user_course
                INNER JOIN dbo.win_user ON dbo.win_user_course.win_user_id = dbo.win_user.win_user_id
                RIGHT OUTER JOIN dbo.course ON dbo.win_user_course.course_id = dbo.course.course_id
            WHERE (dbo.win_user_course.is_active = 'True')
            GROUP BY dbo.win_user.user_name, dbo.course.course_name, dbo.course.no_of_questions_in_course,
                dbo.course.no_of_questionsets_in_course, dbo.course.course_id, dbo.course.course_code
        ) dt_derivedTable_1
        GROUP BY CourseName, UserName
        -- UNION ALL
        -- Table 3 - All Courses.
        -- SELECT DISTINCT null AS UserName,
        --     -2 AS AmountUsed,
        --     -2 AS AnsweredCorrectly,
        --     dbo.course.course_name AS CourseName,
        --     dbo.course.course_code,
        --     dbo.course.no_of_questions_in_course AS NoOfQuestionsInCourse,
        --     dbo.course.no_of_questionsets_in_course AS NoOfQuestionSetsInCourse
        -- FROM dbo.course
        -- WHERE is_active = 'True'
        -- ) dt_derivedTable_2
        -- GROUP BY CourseName
        -- ORDER BY CourseName
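    A hedged sketch of one way to get (3) without a second wrapping derived table: factor the working Table1+Table2 query into a CTE (SQL Server syntax assumed, given the dbo./Isnull usage) so its CourseNames can be reused as a filter. Table1AndTable2Query and Table3Query below are hypothetical placeholders for the full statements above, not real objects.

        -- Sketch only: placeholders stand in for the question's full subqueries.
        WITH combined AS (
            -- the working query that implements (1) and (2)
            SELECT UserName, AmountUsed, AnsweredCorrectly, CourseName,
                   course_code, NoOfQuestionsInCourse, NoOfQuestionSetsInCourse
            FROM Table1AndTable2Query
        )
        SELECT * FROM combined
        UNION ALL
        SELECT t3.*
        FROM Table3Query AS t3
        WHERE t3.CourseName NOT IN (SELECT CourseName FROM combined);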

    Read the article

  • sensors reporting weird temperatures

    - by Felix
    lm-sensors is reporting weird temps for me:

        $ sensors
        coretemp-isa-0000
        Adapter: ISA adapter
        Core 0:     +38.0°C  (high = +72.0°C, crit = +100.0°C)

        coretemp-isa-0001
        Adapter: ISA adapter
        Core 1:     +35.0°C  (high = +72.0°C, crit = +100.0°C)

        coretemp-isa-0002
        Adapter: ISA adapter
        Core 2:     +32.0°C  (high = +72.0°C, crit = +100.0°C)

        coretemp-isa-0003
        Adapter: ISA adapter
        Core 3:     +42.0°C  (high = +72.0°C, crit = +100.0°C)

        w83627dhg-isa-0290
        Adapter: ISA adapter
        Vcore:      +1.10 V  (min = +0.00 V, max = +1.74 V)
        in1:        +1.62 V  (min = +0.06 V, max = +0.17 V)  ALARM
        AVCC:       +3.34 V  (min = +2.98 V, max = +3.63 V)
        VCC:        +3.34 V  (min = +2.98 V, max = +3.63 V)
        in4:        +1.83 V  (min = +1.30 V, max = +1.15 V)  ALARM
        in5:        +1.26 V  (min = +0.83 V, max = +1.03 V)  ALARM
        in6:        +0.11 V  (min = +1.22 V, max = +0.56 V)  ALARM
        3VSB:       +3.30 V  (min = +2.98 V, max = +3.63 V)
        Vbat:       +3.18 V  (min = +2.70 V, max = +3.30 V)
        fan1:         0 RPM  (min = 0 RPM, div = 128)  ALARM
        fan2:      1117 RPM  (min = 860 RPM, div = 8)
        fan3:         0 RPM  (min = 10546 RPM, div = 128)  ALARM
        fan4:         0 RPM  (min = 10546 RPM, div = 128)  ALARM
        fan5:         0 RPM  (min = 10546 RPM, div = 128)  ALARM
        temp1:     +88.0°C  (high = +20.0°C, hyst = +4.0°C)  ALARM  sensor = diode
        temp2:     +25.0°C  (high = +80.0°C, hyst = +75.0°C)  sensor = diode
        temp3:    +121.5°C  (high = +80.0°C, hyst = +75.0°C)  ALARM  sensor = thermistor
        cpu0_vid:  +2.050 V

    Please note temp3. How can I find out what temp3 is, and why is it so high? The system is really stable (which I guess it wouldn't be at those temps). Also, note the really decent core temps, which suggest a healthy system as well. My guess is that the readout is wrong. On another computer it reported temperatures below 0 degrees centigrade, which was not possible considering the ambient temperature of ~22-24. Is this some known bug/issue? Should I try some Windows programs (like CPU-Z) and see if they give similar results?
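    If temp3 turns out to be an unconnected input (a common cause of impossible thermistor readings), lm-sensors can be told to hide it. A hedged sketch of the relevant stanza in /etc/sensors3.conf, with the chip name taken from the output above:

        # Sketch only: silence inputs you have verified are bogus/unwired.
        chip "w83627dhg-*"
            ignore temp3
            ignore fan3
            ignore fan4
            ignore fan5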

    Read the article

  • lm-sensors and CPU temperatures

    - by nalsanj
    I am on Ubuntu Precise Pangolin. The processor is an Intel i3, on a desktop. I installed lm-sensors and below is the report "sensors" gave:

        coretemp-isa-0000
        Adapter: ISA adapter
        Core 0:     +30.0°C  (high = +89.0°C, crit = +105.0°C)
        Core 2:     +33.0°C  (high = +89.0°C, crit = +105.0°C)

        w83627dhg-isa-0a10
        Adapter: ISA adapter
        Vcore:      +0.93 V  (min = +0.00 V, max = +1.74 V)
        in1:        +0.75 V  (min = +1.99 V, max = +1.99 V)  ALARM
        AVCC:       +3.36 V  (min = +2.98 V, max = +3.63 V)
        +3.3V:      +3.36 V  (min = +2.98 V, max = +3.63 V)
        in4:        +1.30 V  (min = +0.90 V, max = +1.77 V)
        in5:        +0.76 V  (min = +1.15 V, max = +0.90 V)  ALARM
        in6:        +1.06 V  (min = +0.94 V, max = +2.03 V)
        3VSB:       +3.36 V  (min = +2.98 V, max = +3.63 V)
        Vbat:       +3.36 V  (min = +2.70 V, max = +3.30 V)  ALARM
        fan1:         0 RPM  (min = 3515 RPM, div = 128)  ALARM
        fan2:         0 RPM  (min = 10546 RPM, div = 128)  ALARM
        fan3:         0 RPM  (min = 10546 RPM, div = 128)  ALARM
        fan5:         0 RPM  (min = 10546 RPM, div = 128)  ALARM
        temp1:     +39.0°C  (high = -121.0°C, hyst = +9.0°C)  ALARM  sensor = diode
        temp2:     +39.0°C  (high = +80.0°C, hyst = +75.0°C)  sensor = diode
        temp3:    +127.0°C  (high = +80.0°C, hyst = +75.0°C)  ALARM  sensor = thermistor
        cpu0_vid:  +2.050 V
        intrusion0:  OK

        radeon-pci-0100
        Adapter: PCI adapter
        temp1:     +70.5°C

    The fan sensors are detecting 0 RPM and some temperatures are out of range (the ALARMs above), but I don't understand it very well. Can someone help out?

    Read the article

  • How to set shmall, shmmax, shmni, etc ... in general and for postgresql

    - by jpic
    I've used the documentation from PostgreSQL to set, for example, this config:

        >>> cat /proc/meminfo
        MemTotal:       16345480 kB
        MemFree:         1770128 kB
        Buffers:          382184 kB
        Cached:         10432632 kB
        SwapCached:            0 kB
        Active:          9228324 kB
        Inactive:        4621264 kB
        Active(anon):    7019996 kB
        Inactive(anon):   548528 kB
        Active(file):    2208328 kB
        Inactive(file):  4072736 kB
        Unevictable:           0 kB
        Mlocked:               0 kB
        SwapTotal:             0 kB
        SwapFree:              0 kB
        Dirty:              3432 kB
        Writeback:             0 kB
        AnonPages:       3034588 kB
        Mapped:          4243720 kB
        Shmem:           4533752 kB
        Slab:             481728 kB
        SReclaimable:     440712 kB
        SUnreclaim:        41016 kB
        KernelStack:        1776 kB
        PageTables:        39208 kB
        NFS_Unstable:          0 kB
        Bounce:                0 kB
        WritebackTmp:          0 kB
        CommitLimit:     8172740 kB
        Committed_AS:   14935216 kB
        VmallocTotal:   34359738367 kB
        VmallocUsed:      399340 kB
        VmallocChunk:   34359334908 kB
        HardwareCorrupted:     0 kB
        AnonHugePages:    456704 kB
        HugePages_Total:       0
        HugePages_Free:        0
        HugePages_Rsvd:        0
        HugePages_Surp:        0
        Hugepagesize:       2048 kB
        DirectMap4k:       12288 kB
        DirectMap2M:    16680960 kB

        >>> ipcs -l

        ------ Shared Memory Limits --------
        max number of segments = 4096
        max seg size (kbytes) = 4316816
        max total shared memory (kbytes) = 4316816
        min seg size (bytes) = 1

        ------ Semaphore Limits --------
        max number of arrays = 128
        max semaphores per array = 250
        max semaphores system wide = 32000
        max ops per semop call = 32
        semaphore max value = 32767

        ------ Messages Limits --------
        max queues system wide = 31918
        max size of message (bytes) = 8192
        default max size of queue (bytes) = 16384

    sysctl.conf extract:

        kernel.shmall = 1079204
        kernel.shmmax = 4420419584

    postgresql.conf non-defaults:

        max_connections = 60                  # (change requires restart)
        shared_buffers = 4GB                  # min 128kB
        work_mem = 4MB                        # min 64kB
        wal_sync_method = open_sync           # the default is the first option
        checkpoint_segments = 16              # in logfile segments, min 1, 16MB each
        checkpoint_completion_target = 0.9    # checkpoint target duration, 0.0 - 1.0
        effective_cache_size = 6GB

    Is this appropriate? If not (or not necessarily), in which case would it be appropriate? We did note nice performance improvements with this config; how would you improve it? How should kernel memory management parameters be set? Can anybody explain how to really set them from the ground up?
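    For the kernel side, one relationship worth knowing: kernel.shmmax is expressed in bytes but kernel.shmall in pages, so the two are usually derived together. A hedged sketch of the arithmetic (the values are illustrative, not a recommendation):

        # Sketch only: shmall = shmmax / page size is the usual rule of thumb.
        PAGE_SIZE=$(getconf PAGE_SIZE)        # typically 4096
        SHMMAX=$((4 * 1024 * 1024 * 1024))    # e.g. allow a single 4 GiB segment
        echo "kernel.shmmax = $SHMMAX"
        echo "kernel.shmall = $((SHMMAX / PAGE_SIZE))"

    The pair quoted in the question already satisfies this rule: 1079204 × 4096 = 4420419584.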

    Read the article

  • Is a safe accumulator really this complicated?

    - by Martin
    I'm trying to write an accumulator that is well behaved given unconstrained inputs. This seems not to be trivial and requires some pretty strict planning. Is it really this hard?

        int naive_accumulator(unsigned int max, unsigned int *accumulator, unsigned int amount)
        {
            if(*accumulator + amount >= max) return 1; // could overflow
            *accumulator += amount; // could overflow
            return 0;
        }

        int safe_accumulator(unsigned int max, unsigned int *accumulator, unsigned int amount)
        {
            // if amount >= max, then certainly *accumulator + amount >= max
            if(amount >= max) {
                return 1;
            }
            // based on the comparison above, max - amount is defined
            // but *accumulator + amount might not be
            if(*accumulator >= max - amount) {
                return 1;
            }
            // based on the comparison above, *accumulator + amount is defined
            // and *accumulator + amount < max
            *accumulator += amount;
            return 0;
        }
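    A hedged usage sketch (hypothetical values, assumes safe_accumulator from above is pasted in): the function reports failure instead of overflowing, and the accumulator is only modified on success.

        #include <stdio.h>

        int main(void) {
            unsigned int total = 0;
            if (safe_accumulator(100u, &total, 60u) == 0)
                printf("added 60, total = %u\n", total);   /* total = 60 */
            if (safe_accumulator(100u, &total, 50u) != 0)
                printf("rejected 50: would reach max\n");  /* 60 + 50 >= 100 */
            printf("final total = %u\n", total);           /* still 60 */
            return 0;
        }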

    Read the article


  • How do ulimit -n and /proc/sys/fs/file-max differ?

    - by bantic
    I notice that on a new CentOS image that I just booted up off of EC2, the ulimit default is 1024 open files, but /proc/sys/fs/file-max is set at 761,408, and I'm wondering how these two limits work together. I'm guessing that ulimit -n is a per-user limit on the number of file descriptors, while /proc/sys/fs/file-max is system-wide? If that's the case, say I've logged in twice as the same user -- does each logged-in user have a 1024 limit on the number of open files, or is it a limit of 1024 combined open files between those logged-in sessions? And is there much performance impact to setting your max file descriptors to a very high number, if your system isn't ever opening very many files?
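    For orientation, a hedged sketch of inspecting both layers side by side (ulimit -n is a per-process limit inherited at login rather than a pool shared per user; file-max caps the whole system):

        ulimit -Sn                  # soft per-process limit (what each new shell gets)
        ulimit -Hn                  # hard per-process ceiling
        cat /proc/sys/fs/file-max   # system-wide cap across all processes
        cat /proc/sys/fs/file-nr    # allocated, free, max -- current system-wide usage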

    Read the article

  • Need to hookup HP dv7-3085dx with NVIDIA GeForce GT 230M to my Dell 30 inch LCD 3007WFP at max resolution

    - by user14660
    I recently bought an HP laptop (dv7-3085dx) which is supposed to have a pretty good video card (NVIDIA GeForce GT 230M). The card is supposed to output a max resolution of 2560x1600 which is also the max resolution of my monitor. I've now bought an HDMI to dual link DVI cable - this is after Best Buy's 70 dollar HDMI to DVI (perhaps it was 'single' link?) didn't give me the best resolution. In Windows 7, when I try to set the max resolution for my 30" monitor, I only get 1280x800, which is absurd. The monitor is great, I love the laptop and the video card supposedly supports such resolutions. I therefore can't figure out why I'm not getting a better resolution. When I "detect" my monitor in Windows 7, it is shown correctly as a DELL 3007WFP!

    Read the article

  • Performance issues when using SSD for a developer notebook (WAMP/LAMP stack)?

    - by András Szepesházi
    I'm a web application developer using my notebook as a standalone development environment (WAMP stack). I just switched from a Core2-duo Vista 32 bit notebook with 2Gb RAM and a SATA HDD, to an i5-2520M Win7 64 bit with 4Gb RAM and a 128 GB SSD (Corsair P3 128). My initial experience was what I expected: fast boot, quick load of all the applications (Eclipse now takes 5 seconds as opposed to 30s on my old notebook), overall a great experience. Then I started to build up my development stack, both as LAMP (using VirtualBox with a Debian guest) and WAMP (Windows-native Apache + MySQL + PHP). I wanted to compare those two. This still all worked out great; then I started to pull my projects into these stacks. And here came the nasty surprise: one of those projects produced much worse response times than on my old notebook (that was true for both the VirtualBox and WAMP stacks). Apache, PHP and MySQL configurations were practically identical in all environments. I started to do a lot of benchmarking and profiling, and here is what I've found:

    All general benchmarks (Performance Test 7.0, HDTune Pro, wPrime2 and some more) gave a big advantage to the new notebook. Nothing surprising here.

    Disc-specific tests showed that read/write operations peaked around 380M/160M for the SSD, and all the different-sized block operations also performed very well.

    Apache performance benchmarking with Apache Benchmark for a small static html file (10 concurrent threads, 500 iterations):

        Old notebook:                 min 47ms,  median 111ms, max 156ms
        New WAMP stack:               min 71ms,  median 135ms, max 296ms
        New LAMP stack (VirtualBox):  min 6ms,   median 46ms,  max 175ms

    Right here I don't get why the native WAMP stack performed so badly, but at least the LAMP environment brought the expected speed.

    Apache performance measurement for non-cached PHP content. The PHP runs a loop of 1000 and generates sha1(uniqid()) inside. Again, 10 concurrent threads, 500 iterations were used for the benchmark:

        Old notebook:                 min 0ms,   median 39ms,  max 218ms
        New WAMP stack:               min 20ms,  median 61ms,  max 186ms
        New LAMP stack (VirtualBox):  min 124ms, median 704ms, max 2463ms

    What the hell? The new LAMP stack performed miserably, and even the new native WAMP was outperformed by the old notebook.

    PHP + MySQL test. The test consists of connecting to a database and reading a single record from a table using INNER JOIN on 3 more (indexed) tables, repeated 100 times within a loop. Databases were identical. 10 concurrent threads, 100 iterations were used for the benchmark:

        Old notebook:                 min 1201ms, median 1734ms, max 3728ms
        New WAMP stack:               min 367ms,  median 675ms,  max 1893ms
        New LAMP stack (VirtualBox):  min 1410ms, median 3659ms, max 5045ms

    And the same test with concurrency set to 1 (instead of 10):

        Old notebook:                 min 1201ms, median 1261ms, max 1357ms
        New WAMP stack:               min 399ms,  median 483ms,  max 539ms
        New LAMP stack (VirtualBox):  min 285ms,  median 348ms,  max 444ms

    Strictly for my purposes, as I'm using a self-contained development environment (= low concurrency), I could be satisfied with the second test's results. Though I have no idea why the VirtualBox environment performed so badly with higher concurrency.

    Finally, I performed a test of including many PHP files. The application that I mentioned at the beginning, the one that was performing so badly, has a heavy bootstrap: it loads hundreds of small library and configuration files while initializing. So this test does nothing else, just includes about 100 files. Concurrency set to 1, 100 iterations:

        Old notebook:                 min 140ms, median 168ms,  max 406ms
        New WAMP stack:               min 434ms, median 488ms,  max 604ms
        New LAMP stack (VirtualBox):  min 413ms, median 1040ms, max 1921ms

    Even if I consider that VirtualBox reached those files via shared folders, which slows things down a bit, I still don't see how the old notebook could outperform both new configurations so heavily. And I think this is the real root of the slow performance, as the application uses even more includes, and the whole bootstrap occurs several times within a page request (for each ajax call, for example).

    To sum it up, here I am with a brand-new high-performance notebook that loads the same page in 20 seconds that my old notebook can do in 5-7 seconds. Needless to say, I'm not a very happy person right now. Why do you think I experience these poor performance values? What are my options to remedy this situation?

    Read the article

  • How can I make TextToSpeech speak text at max volume and restore the original volume after the speech ends?

    - by HelloCW
    I save the current volume of both STREAM_RING and STREAM_MUSIC before sTts.get().speak(s, TextToSpeech.QUEUE_ADD, null). I hoped TextToSpeech would speak the text at max volume, but in fact I find it speaks the text at the current volume; it seems that sTts.get().speak is asynchronous. How can I make TextToSpeech speak a text at max volume and restore the original volume after the speech ends? Thanks!

        public class SpeechTxt {
            private static SoftReference<TextToSpeech> sTts;

            public static void SpeakOut(final Context context, final String s) {
                final Context appContext = context.getApplicationContext();
                if (sTts == null) {
                    sTts = new SoftReference<TextToSpeech>(new TextToSpeech(appContext, new TextToSpeech.OnInitListener() {
                        @Override
                        public void onInit(int status) {
                            if (status == TextToSpeech.SUCCESS) {
                                speak(appContext, s);
                            } else {
                            }
                        }
                    }));
                } else {
                    speak(appContext, s);
                }
            }

            private static void speak(Context context, String s) {
                if (sTts != null) {
                    switch (sTts.get().setLanguage(Locale.getDefault())) {
                        case TextToSpeech.LANG_COUNTRY_AVAILABLE:
                        case TextToSpeech.LANG_COUNTRY_VAR_AVAILABLE:
                        case TextToSpeech.LANG_AVAILABLE: {
                            sTts.get().setPitch((float) 0.6);
                            sTts.get().setSpeechRate((float) 0.8);
                            int currentRing = PublicParFun.GetCurrentVol(context, AudioManager.STREAM_RING);
                            int currentPlay = PublicParFun.GetCurrentVol(context, AudioManager.STREAM_MUSIC);
                            PublicParFun.SetRingVol(context, 0);
                            PublicParFun.SetPlayVol(context, 1000000);
                            sTts.get().speak(s, TextToSpeech.QUEUE_ADD, null);
                            PublicParFun.SetRingVol(context, currentRing);
                            PublicParFun.SetPlayVol(context, currentPlay);
                            break;
                        }
                        case TextToSpeech.LANG_MISSING_DATA: {
                            break;
                        }
                        case TextToSpeech.LANG_NOT_SUPPORTED:
                            // not much to do here
                    }
                }
            }

            public static int GetCurrentVol(Context myContext, int streamType) {
                AudioManager mAudioManager = (AudioManager) myContext.getSystemService(Context.AUDIO_SERVICE);
                int current = mAudioManager.getStreamVolume(streamType);
                return current;
            }

            public static void SetRingVol(Context myContext, int vol) {
                SetVol(myContext, AudioManager.STREAM_RING, vol);
            }

            public static void SetPlayVol(Context myContext, int vol) {
                SetVol(myContext, AudioManager.STREAM_MUSIC, vol);
            }

            private static void SetVol(Context myContext, int streamType, int vol) {
                AudioManager mAudioManager = (AudioManager) myContext.getSystemService(Context.AUDIO_SERVICE);
                int max = mAudioManager.getStreamMaxVolume(streamType);
                if (vol > max) {
                    vol = max;
                }
                mAudioManager.setStreamVolume(streamType, vol, 0);
            }
        }
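    A hedged sketch of the usual fix (UtteranceProgressListener, available from API 15): since speak() returns immediately, restore the saved volumes only when the engine reports the utterance done. PublicParFun, context, currentRing and currentPlay are the question's own names (they would need to be effectively final here); the utterance id is hypothetical.

        // Sketch only: restore volume in onDone() instead of right after speak().
        HashMap<String, String> params = new HashMap<String, String>();
        params.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID, "maxVolumeUtterance");

        sTts.get().setOnUtteranceProgressListener(new UtteranceProgressListener() {
            @Override public void onStart(String utteranceId) { }
            @Override public void onError(String utteranceId) { }
            @Override public void onDone(String utteranceId) {
                // runs when playback has actually finished
                PublicParFun.SetRingVol(context, currentRing);
                PublicParFun.SetPlayVol(context, currentPlay);
            }
        });
        sTts.get().speak(s, TextToSpeech.QUEUE_ADD, params);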

    Read the article

  • How to get max of composite data in SQL?

    - by Siddharth Sinha
    SELECT "Name""Month","Year","Value" from Table WHERE "Name" LIKE '%JERRY%' AND "Year" = (SELECT MAX("Year") FROM Table where "Name" LIKE '%JERRY%') AND "Month"= (SELECT MAX("Month") FROM Table where "Name" LIKE '%JERRY%' AND "Year"= (SELECT MAX("Year") FROM Table where "Name" LIKE '%JERRY%')) Table -- Name | Year | Month | Value ----------------------------- JERRY 2012 9 100 JERRY 2012 9 120 JERRY 2012 9 130 JERRY 2012 8 20 JERRY 2011 12 50 So i want the first three rows as output. As for the latest month for the latest year i need all the values. Can someone suggest a better cleaner query?

    Read the article

  • What's the best way to select max over multiple fields in SQL?

    - by allyourcode
    What I kind of want to do is select max(f1, f2, f3). I know this doesn't work, but I think what I want should be pretty clear (see update 1). I was thinking of doing select max(concat(f1, '--', f2 ...)), but this has various disadvantages. In particular, doing concat will probably slow things down. What's the best way to get what I want?

    Update 1: The answers I've gotten so far aren't what I'm after. max works over a set of records, but it compares them using only one value; I want max to consider several values, just like the way order by can consider several values.

    Update 2: Suppose I have the following table:

        id  class_name  order_by1  order_by2
        1   a           0          0
        2   a           0          1
        3   b           1          0
        4   b           0          9

    I want a query that will group the records by class_name. Then, within each "class", select the record that would come first under a multi-column ordering on order_by1 and order_by2. The result set would consist of records 2 and 3. In my magical query language, it would look something like this:

        select max(* order by order_by1 ASC, order_by2 ASC)
        from table
        group by class_name
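    A hedged sketch of how engines with window functions (MySQL 8+, PostgreSQL) express this "one row per group, chosen by a multi-column ordering" idea; my_table is a hypothetical name, and the ORDER BY is descending so the "max" row per group matches the sample output (records 2 and 3):

        -- Sketch only: rank rows inside each class by both columns, keep rank 1.
        SELECT id, class_name, order_by1, order_by2
        FROM (
            SELECT t.*,
                   ROW_NUMBER() OVER (PARTITION BY class_name
                                      ORDER BY order_by1 DESC, order_by2 DESC) AS rn
            FROM my_table t
        ) ranked
        WHERE rn = 1;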

    Read the article

  • How do I generate a random integer between min and max in Java?

    - by David
    What method returns a random int between a min and max? Or does no such method exist? What I'm looking for is something like this:

        NAMEOFMETHOD(min, max)

    (where min and max are ints) that returns something like this: 8 (randomly). If such a method does exist, could you please link to the relevant documentation with your answer? Thanks.
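    For reference, a hedged sketch of the usual composition: java.util.Random has no two-argument nextInt in older releases, so the range is built from nextInt(bound). (From Java 7 on, java.util.concurrent.ThreadLocalRandom.current().nextInt(min, max + 1) does the same directly.)

        import java.util.Random;

        public class RandomRange {
            private static final Random RNG = new Random();

            // Returns a uniformly random int in [min, max], both ends inclusive.
            static int randomBetween(int min, int max) {
                return min + RNG.nextInt(max - min + 1);
            }

            public static void main(String[] args) {
                System.out.println(randomBetween(5, 10)); // e.g. 8
            }
        }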

    Read the article
