Search Results

Search found 4921 results on 197 pages for 'conditional execution'.

Page 163/197 | < Previous Page | 159 160 161 162 163 164 165 166 167 168 169 170  | Next Page >

  • Mysql select - improve performance

    - by realshadow
    Hey, I am working on an e-shop which sells products only via loans. I display 10 products per page in any category, each product has 3 different price tags - 3 different loan types. Everything went pretty well during testing time, query execution time was perfect, but today when I transferred the changes to the production server, the site "collapsed" in about 2 minutes. The query that is used to select loan types sometimes hangs for ~10 seconds, it happens frequently, and thus it can't keep up and is extremely slow. The table that is used to store the data has approximately 2 million records and each select looks like this: SELECT * FROM products_loans WHERE KOD IN("X17/Q30-10", "X17/12", "X17/5-24") AND 369.27 BETWEEN CENA_OD AND CENA_DO; 3 loan types and the price that needs to be in range between CENA_OD and CENA_DO, thus 3 rows are returned. But since I need to display 10 products per page, I need to run it through a modified select using OR, since I didn't find any other solution to this. I have asked about it here, but got no answer. As mentioned in the referencing post, this has to be done separately since there is no column that could be used in a join (except of course price and code, but that ended very, very badly). Here is the SHOW CREATE TABLE output; KOD and CENA_OD/CENA_DO are indexed via INDEX: CREATE TABLE `products_loans` ( `KOEF_ID` bigint(20) NOT NULL, `KOD` varchar(30) NOT NULL, `AKONTACIA` int(11) NOT NULL, `POCET_SPLATOK` int(11) NOT NULL, `koeficient` decimal(10,2) NOT NULL default '0.00', `CENA_OD` decimal(10,2) default NULL, `CENA_DO` decimal(10,2) default NULL, `PREDAJNA_CENA` decimal(10,2) default NULL, `AKONTACIA_SUMA` decimal(10,2) default NULL, `TYP_VYHODY` varchar(4) default NULL, `stage` smallint(6) NOT NULL default '1', PRIMARY KEY (`KOEF_ID`), KEY `CENA_OD` (`CENA_OD`), KEY `CENA_DO` (`CENA_DO`), KEY `KOD` (`KOD`), KEY `stage` (`stage`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 And also, selecting all loan types and later filtering them through PHP doesn't work well, since each type has over 50k records and the select takes too much time as well... Any ideas about improving the speed are appreciated.
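
    A sketch of the combined per-page query described above (one OR block per product); the codes and prices beyond the first block are hypothetical:

        SELECT *
        FROM products_loans
        WHERE (KOD IN ('X17/Q30-10', 'X17/12', 'X17/5-24') AND 369.27 BETWEEN CENA_OD AND CENA_DO)
           OR (KOD IN ('A01/Q30-10', 'A01/12', 'A01/5-24') AND 129.90 BETWEEN CENA_OD AND CENA_DO)
           -- ... one block per product, 10 blocks for a full page
        ;

    Whether a composite index covering (KOD, CENA_OD, CENA_DO) would suit this access pattern better than the separate single-column keys is only a guess, but it is cheap to test.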

    Read the article

  • How does static code run with multiple threads?

    - by Krisc
    I was reading http://stackoverflow.com/questions/1511798/threading-from-within-a-class-with-static-and-non-static-methods and I am in a similar situation. I have a static method that pulls data from a resource and creates some runtime objects based on the data. static class Worker{ public static MyObject DoWork(string filename){ MyObject mo = new MyObject(); // ... does some work return mo; } } The method takes a while (in this case it is reading 5-10 MB files) and returns an object. I want to take this method and use it in a multiple thread situation so I can read multiple files at once. Design issues / guidelines aside, how would multiple threads access this code? Let's say I have something like this... class ThreadedWorker { public void Run() { Thread t = new Thread(OnRun); t.Start(); } void OnRun() { MyObject mo = Worker.DoWork("somefilename"); mo.WriteToConsole(); } } Does the static method run for each thread, allowing for parallel execution?
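
    A minimal sketch (hypothetical file names) of several threads calling the static method at once; each call gets its own stack frame and its own local mo, so the calls can run in parallel as long as DoWork touches no shared mutable state:

        using System.Threading;

        class Program
        {
            static void Main()
            {
                string[] files = { "file1.dat", "file2.dat", "file3.dat" }; // hypothetical
                var threads = new Thread[files.Length];
                for (int i = 0; i < files.Length; i++)
                {
                    string f = files[i];                 // copy for the closure
                    threads[i] = new Thread(() =>
                    {
                        MyObject mo = Worker.DoWork(f);  // independent call per thread
                        mo.WriteToConsole();
                    });
                    threads[i].Start();
                }
                foreach (var t in threads) t.Join();     // wait for all reads to finish
            }
        }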

    Read the article

  • Delphi trace tool

    - by Max
    I was wondering if there's a tool or a component for Delphi that can trace method execution line by line and create a log file. With this kind of tool it is easy to compare how a method performs on two sets of input data by comparing two log files. EDIT: Let's say there is a function 10: function MyFunction(aInput: Integer): Integer; 11: begin 12: if aInput > 10 then 13: Result := 10 14: else 15: Result := 0; 16: end; I'm looking for a tool that would give a log which would be similar to the following: When the aInput parameter is 1: Line 10: 'function MyFunction(aInput: Integer): Integer;' Line 11: 'begin' Line 12: 'if aInput > 10 then' Line 15: 'Result := 0;' Line 16: 'end;' and when the aInput parameter is 11: Line 10: 'function MyFunction(aInput: Integer): Integer;' Line 11: 'begin' Line 12: 'if aInput > 10 then' Line 13: 'Result := 10;' Line 16: 'end;' The only information that should be required by the tool is the function name. It's like stepping through the method under a debugger, but in an automatic manner with logging of every line of code.

    Read the article

  • Same query has nested loops when used with INSERT, but Hash Match without.

    - by AaronLS
    I have two tables, one has about 1500 records and the other has about 300000 child records. About a 1:200 ratio. I stage the parent table to a staging table, SomeParentTable_Staging, and then I stage all of its child records, but I only want the ones that are related to the records I staged in the parent table. So I use the below query to perform this staging by joining with the parent table's staged data. --Stage child records INSERT INTO [dbo].[SomeChildTable_Staging] ([SomeChildTableId] ,[SomeParentTableId] ,SomeData1 ,SomeData2 ,SomeData3 ,SomeData4 ) SELECT [SomeChildTableId] ,D.[SomeParentTableId] ,SomeData1 ,SomeData2 ,SomeData3 ,SomeData4 FROM [dbo].[SomeChildTable] D INNER JOIN dbo.SomeParentTable_Staging I ON D.SomeParentTableID = I.SomeParentTableID; The execution plan indicates that the tables are being joined with a Nested Loop. When I run just the select portion of the query without the insert, the join is performed with a Hash Match. So the select statement is the same, but in the context of an insert it uses the slower nested loop. I have added a non-clustered index on D.SomeParentTableID so that there is an index on both sides of the join. I.SomeParentTableID is a primary key with a clustered index. Why does it use a nested loop for inserts that use a join? Is there a way to improve the performance of the join for the insert?
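
    If the hash plan really is the better one for this data (an assumption), a join hint is one way to test that directly; OPTION applies to the whole INSERT ... SELECT statement:

        INSERT INTO [dbo].[SomeChildTable_Staging]
            ([SomeChildTableId], [SomeParentTableId], SomeData1, SomeData2, SomeData3, SomeData4)
        SELECT [SomeChildTableId], D.[SomeParentTableId],
               SomeData1, SomeData2, SomeData3, SomeData4
        FROM [dbo].[SomeChildTable] D
        INNER JOIN dbo.SomeParentTable_Staging I
            ON D.SomeParentTableID = I.SomeParentTableID
        OPTION (HASH JOIN);   -- forces the hash match that the stand-alone SELECT chooses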

    Read the article

  • C# KeyEvent doesn't log the enter/return key

    - by Pieter888
    Hey all, I've been making this login form in C# and I wanted to 'submit' all the data as soon as the user either clicks on submit or presses the enter/return key. I've been testing a bit with KeyEvents but nothing so far has worked. void tbPassword_KeyPress(object sender, KeyPressEventArgs e) { MessageBox.Show(e.KeyChar.ToString()); } The above code was to test if the event even worked in the first place. It works perfectly: when I press 'd' it shows me 'd', when I press '8' it shows me '8', but pressing enter doesn't do anything. So I thought this was because enter isn't really bound to a character, but backspace did show up and worked just fine, so it got me confused about why it didn't register my enter key. So the question is: how do I log the enter/return key, and why doesn't it log the key press right now like it should? note: I've put the event on a textbox tbPassword.KeyPress += new KeyPressEventHandler(tbPassword_KeyPress); So it fires when the enter button is pressed WHILE the textbox is selected (which it was the whole time, of course); maybe that has something to do with the execution of the code.
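
    Worth noting: Enter normally does reach KeyPress as the carriage-return character, which displays as an apparently empty MessageBox, so it can be tested for explicitly. A sketch (Windows Forms assumed; SubmitLogin is a hypothetical method, and if the form's AcceptButton is consuming the key first, that is a separate issue):

        void tbPassword_KeyPress(object sender, KeyPressEventArgs e)
        {
            if (e.KeyChar == (char)Keys.Enter)   // same as (char)13 / '\r'
            {
                e.Handled = true;                // suppress the "ding"
                SubmitLogin();                   // hypothetical submit method
            }
        }

        // Alternative: KeyDown reports a key code instead of a character.
        void tbPassword_KeyDown(object sender, KeyEventArgs e)
        {
            if (e.KeyCode == Keys.Enter)
                SubmitLogin();
        }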

    Read the article

  • Can shared memory be read and validated without mutexes?

    - by Bribles
    On Linux I'm using shmget and shmat to set up a shared memory segment that one process will write to and one or more processes will read from. The data that is being shared is a few megabytes in size and when updated is completely rewritten; it's never partially updated. I have my shared memory segment laid out as follows: ------------------------- | t0 | actual data | t1 | ------------------------- where t0 and t1 are copies of the time when the writer began its update (with enough precision such that successive updates are guaranteed to have differing times). The writer first writes to t1, then copies in the data, then writes to t0. The reader on the other hand reads t0, then the data, then t1. If the reader gets the same value for t0 and t1 then it considers the data consistent and valid, if not, it tries again. Does this procedure ensure that if the reader thinks the data is valid then it actually is? Do I need to worry about out-of-order execution (OOE)? If so, would the reader using memcpy to get the entire shared memory segment overcome the OOE issues on the reader side? (This assumes that memcpy performs its copy linearly and ascending through the address space. Is that assumption valid?)
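
    A sketch of the reader side with explicit full barriers (assuming GCC's __sync_synchronize() and 64-bit timestamps for t0/t1); it does not settle the OOE question by itself, but it shows where fences would sit, and the writer would need the mirror-image barriers (write t1, barrier, data, barrier, t0):

        #include <stdint.h>
        #include <string.h>

        #define DATA_SIZE (4 * 1024 * 1024)   /* hypothetical payload size */

        struct shared {
            volatile uint64_t t0;
            char              data[DATA_SIZE];
            volatile uint64_t t1;
        };

        void read_snapshot(struct shared *shm, char *out)
        {
            uint64_t a, b;
            do {
                a = shm->t0;
                __sync_synchronize();              /* read t0 before the data  */
                memcpy(out, shm->data, DATA_SIZE);
                __sync_synchronize();              /* read the data before t1  */
                b = shm->t1;
            } while (a != b);                      /* writer raced us: retry   */
        }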

    Read the article

  • Python Error-Checking Standard Practice

    - by chaindriver
    Hi, I have a question regarding error checking in Python. Let's say I have a function that takes a file path as an input: def myFunction(filepath): infile = open(filepath) #etc etc... One possible precondition would be that the file should exist. There are a few possible ways to check for this precondition, and I'm just wondering what's the best way to do it. i) Check with an if-statement: if not os.path.exists(filepath): raise IOError('File does not exist: %s' % filepath) This is the way that I would usually do it, though the same IOError would be raised by Python anyway if the file does not exist, even if I don't raise it. ii) Use assert to check for the precondition: assert os.path.exists(filepath), 'File does not exist: %s' % filepath Using asserts seems to be the "standard" way of checking for pre/postconditions, so I am tempted to use these. However, it is possible that these asserts are turned off when the -O flag is used during execution, which means that this check might potentially be turned off and that seems risky. iii) Don't handle the precondition at all. This is because if filepath does not exist, an exception will be generated anyway and the exception message is detailed enough for the user to know that the file does not exist. I'm just wondering which of the above is the standard practice that I should use for my code.
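
    A fourth variant worth sketching (an EAFP-style sketch, not necessarily the standard): let open() raise and re-raise with context at the boundary you care about, which also avoids the race between the exists() check and the actual open:

        def my_function(filepath):
            try:
                infile = open(filepath)
            except IOError as e:
                raise IOError('Could not open %s: %s' % (filepath, e))
            try:
                return infile.read()      # ... actual processing goes here
            finally:
                infile.close()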

    Read the article

  • SQL Server full text query across multiple tables - why so slow?

    - by Mikey Cee
    Hi. I'm trying to understand the performance of an SQL Server 2008 full-text query I am constructing. The following query, using a full-text index, returns the correct results immediately: SELECT O.ID, O.Name FROM dbo.EventOccurrence O WHERE FREETEXT(O.Name, 'query') i.e. all EventOccurrences with the word 'query' in their name. And the following query, using a full-text index from a different table, also returns straight away: SELECT V.ID, V.Name FROM dbo.Venue V WHERE FREETEXT(V.Name, 'query') i.e. all Venues with the word 'query' in their name. But if I try to join the tables and do both full-text queries at once, it takes 12 seconds to return: SELECT O.ID, O.Name FROM dbo.EventOccurrence O INNER JOIN dbo.Event E ON O.EventID = E.ID INNER JOIN dbo.Venue V ON E.VenueID = V.ID WHERE FREETEXT(E.Name, 'search') OR FREETEXT(V.Name, 'search') Here is the execution plan: http://uploadpad.com/files/query.PNG From my reading, I didn't think it was even possible to make a free text query across multiple tables in this way, so I'm not sure I am understanding this correctly. Note that if I remove the WHERE clause from this last query then it returns all results within a second, so it's definitely the full-text that is causing the issue here. Can someone explain (i) why this is so slow and (ii) if this is even supported / if I am even understanding this correctly. Thanks in advance for your help.
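
    One rewrite that is often tried for OR-ed full-text predicates spanning two tables is to evaluate each FREETEXT against its own table and combine the row sets with UNION; whether it helps with these particular indexes is an assumption:

        SELECT O.ID, O.Name
        FROM dbo.EventOccurrence O
        INNER JOIN dbo.Event E ON O.EventID = E.ID
        WHERE FREETEXT(E.Name, 'search')
        UNION
        SELECT O.ID, O.Name
        FROM dbo.EventOccurrence O
        INNER JOIN dbo.Event E ON O.EventID = E.ID
        INNER JOIN dbo.Venue V ON E.VenueID = V.ID
        WHERE FREETEXT(V.Name, 'search');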

    Read the article

  • Query broke down and left me stranded in the woods

    - by user1290323
    I am trying to execute a query that deletes all files from the images table that do not exist in the filters tables. I am skipping 3,500 of the latest files in the database as to sort of "Trim" the table back to 3,500 + "X" amount of records in the filters table. The filters table holds markers for the file, as well as the file id used in the images table. The code will run on a cron job. My Code: $sql = mysql_query("SELECT * FROM `images` ORDER BY `id` DESC") or die(mysql_error()); while($row = mysql_fetch_array($sql)){ $id = $row['id']; $file = $row['url']; $getId = mysql_query("SELECT `id` FROM `filter` WHERE `img_id` = '".$id."'") or die(mysql_error()); if(mysql_num_rows($getId) == 0){ $IdQue[] = $id; $FileQue[] = $file; } } for($i=3500; $i<$x; $i++){ mysql_query("DELETE FROM `images` WHERE id='".$IdQue[$i]."' LIMIT 1") or die("line 18".mysql_error()); unlink($FileQue[$i]) or die("file Not deleted"); } echo ($i-3500)." files deleted."; Output: 0 files deleted. Database contents: images table: 10,000 rows filters table: 63 rows Amount of rows in filters table that contain an images table id: 63 Execution time of php script: 4 seconds +/- 0.5 second Relevant DB structure TABLE: images id url etc... TABLE: filter id img_id (CONTAINS ID FROM images table) etc...
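
    A set-based sketch of the same clean-up in a single statement, assuming a hypothetical @cutoff id marking the 3,500th-newest image (computed separately, e.g. from ORDER BY id DESC LIMIT 3500, 1); the unlink() of the files themselves would still happen in PHP:

        -- MySQL multi-table DELETE: remove images rows that no filter row points at.
        DELETE images
        FROM images
        LEFT JOIN filter ON filter.img_id = images.id
        WHERE filter.id IS NULL
          AND images.id < @cutoff;   -- @cutoff is hypothetical; keeps the newest 3,500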

    Read the article

  • JQuery submit form and output response or error message

    - by sergdev
    I want to submit a form and show a message about the result. update_records initializes alert_message to an error message. On success I expect its value to be changed. Then update_records outputs the message. But the function always alerts "Error submitting form". What is wrong with this? The code follows: function update_records(form_name) { var options = { async: false, alert_message: "Error submitting form", success: function(message) { this.alert_message = message; } }; $('#' + form_name).ajaxSubmit(options); alert(options.alert_message); } I am a newbie in JavaScript/JSON/jQuery and I suspect that I misunderstand some basics of the mentioned technologies. UPDATE: I specified "async: false" to make execution synchronous (is that correct?). I also tried to insert a delay between the following two lines: $('#' + form_name).ajaxSubmit(options); pausecomp(1000); // inserted pause alert(options.alert_message); It also does not resolve the issue. The pausecomp function follows: function pausecomp(millis) { var date = new Date(); var curDate = null; do { curDate = new Date(); } while(curDate-date < millis); }
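
    Since the submit is normally asynchronous, an alert placed after ajaxSubmit can run before the response has arrived (and a busy-wait only blocks the same thread the response handler needs). A sketch that keeps the result handling inside the callbacks and uses a closed-over variable instead of this:

        function update_records(form_name) {
            var alert_message = "Error submitting form";
            $('#' + form_name).ajaxSubmit({
                success: function (message) {
                    alert_message = message;   // closure variable, not this.*
                    alert(alert_message);      // runs once the response is back
                },
                error: function () {
                    alert(alert_message);
                }
            });
        }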

    Read the article

  • How do I manage dependencies for automated builds on my build server?

    - by Tom Pickles
    I'm trying to implement continuous integration into our day-to-day work. In our team, we're moving from just building our code in Visual Studio on our workstations and deploying, to using MSBuild.exe and automating on our build server (which is Jenkins) without the use of Visual Studio. We have external dependencies on references such as Automap in our projects. Because the Automap DLL (for example) isn't on the build server, the MSBuild execution fails, for obvious reasons. There are other DLLs which I need to be part of the build; I'm just using Automap as an example. So what's the best way to get any dependencies onto the build server as part of the automated build? I've seen references to using a 'lib' folder, but I don't really understand where I should be putting it (in my project, filesystem, SVN ...?), and how the build server will get to it. I've also read that NuGet can do something with dependencies, but my build server isn't connected to the internet, and I don't understand how I can get my build to pull a NuGet package I may have created, and how it works together. Edit: I'm using Subversion and we cannot use TeamCity as we would have to buy it and there's zero chance of funding.

    Read the article

  • Class initialization and synchronized class method

    - by nybon
    Hi there, In my application, there is a class like below: public class Client { public static synchronized void print() { System.out.println("hello"); } static { doSomething(); // which will take some time to complete } } This class will be used in a multi-threaded environment; many threads may call the Client.print() method simultaneously. I wonder if there is any chance that thread-1 triggers the class initialization, and before the class initialization completes, thread-2 enters the print method and prints out the "hello" string? I see this behavior in a production system (64-bit JVM + Windows 2008 R2); however, I cannot reproduce this behavior with a simple program in any environment. In the Java language spec, section 12.4.1 (http://java.sun.com/docs/books/jls/second_edition/html/execution.doc.html), it says: A class or interface type T will be initialized immediately before the first occurrence of any one of the following: T is a class and an instance of T is created. T is a class and a static method declared by T is invoked. A static field declared by T is assigned. A static field declared by T is used and the reference to the field is not a compile-time constant (§15.28). References to compile-time constants must be resolved at compile time to a copy of the compile-time constant value, so uses of such a field never cause initialization. According to this paragraph, the class initialization will take place before the invocation of the static method; however, it is not clear whether the class initialization needs to be completed before the invocation of the static method. My intuition is that the JVM should mandate the completion of class initialization before entering its static method, and some of my experiments support my guess. However, I did see the opposite behavior in another environment. Can someone shed some light on this? Any help is appreciated, thanks.
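
    A self-contained sketch that makes the timing observable (mirroring the class above, with a deliberately slow static initializer); per JLS 12.4.2 initialization is synchronized on the class, so on a conforming JVM both "hello" lines should appear only after "init ends":

        public class ClientInitDemo {

            static class Client {
                public static synchronized void print() {
                    System.out.println("hello at " + System.currentTimeMillis());
                }
                static {
                    System.out.println("init starts at " + System.currentTimeMillis());
                    try { Thread.sleep(3000); } catch (InterruptedException e) { }
                    System.out.println("init ends at   " + System.currentTimeMillis());
                }
            }

            public static void main(String[] args) {
                Runnable r = new Runnable() {
                    public void run() { Client.print(); }
                };
                new Thread(r).start();
                new Thread(r).start();
            }
        }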

    Read the article

  • NUnit integration programmatically with spring

    - by harkon
    Hi! I have a component-based architecture framework designed, and I use NUnit for isolated testing - okay so far. Now I want to enable integration tests. Therefore the tests use real implementations of the existing components. Each element of the component has a life cycle (init, start and stop) and I created an NUnit component. In the start section the NUnit console runner is executed. Okay - now if I have a test fixture class in my DLLs in the execution path, the runner executes them - fine! But - and this is crucial - each implementation to be tested already exists in the process, and I want to use these instances for testing. If I use the NUnit runner in the current way, each instance will be created twice - and above all: I have a Spring container and an implementation registry. Via this registry I can get access to all instances in the process. But how do I give the test fixture access to the existing registry? Good: I can start the component architecture framework in the startup of the NUnit runner - but this is not what I want. My guide is the Apache Cactus framework (with JUnit and Tomcat, JBoss etc.) Can someone help? Thanks a lot! Check: http://cone.codeplex.com

    Read the article

  • reload parent from within iframe

    - by Lauren
    I can't seem to reload the parent page from within an iframe... I've looked around at similar questions' answers but nothing has worked so far. The iframe I'm working with is here: http://www.avaline.com/R3000_3 - once you log in, hit the "order sample" button, and then hit "here" where it says "Your Third Party Shipper Numbers (To enter one, click here.)". I tried using the javascript statements window.top.location.reload(), window.parent.location.reload(), window.parent.location.href=window.parent.location.href but none of those worked in FF 3.6, so I didn't move on to the other browsers, although I am shooting for a cross-browser solution. I put the one-line javascript statements inside setTimeout("statement",2000) so people could read the content of the iframe (You have updated your shipper number(s). The page should refresh automatically. If not, please refresh and return to "order sample.") before the redirect happens, but that shouldn't affect the execution of the statements... I wish I could test and debug the statements with the Firebug console from within the iframe, but there doesn't seem to be any great way to test this out.

    Read the article

  • LinqToSQL not updating database

    - by codegarten
    Hi. I created a database and dbml in Visual Studio 2010 using its wizards. Everything was working fine until I checked the table's data (also in Visual Studio Server Explorer) and none of my updates were there. using (var context = new CenasDataContext()) { context.Log = Console.Out; context.Cenas.InsertOnSubmit(new Cena() { id = 1}); context.SubmitChanges(); } This is the code I am using to update my database. At this point my database has one table with one field (PK) named ID. INSERT INTO [dbo].Cenas VALUES (@p0) -- @p0: Input Int (Size = -1; Prec = 0; Scale = 0) [1] -- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 4.0.30319.1 This is the log from the execution (the context log is printed to the console). The problem I'm having is that these updates are not persisted in the database. I mean that when I query my database (Visual Studio Server Explorer - New Query) I see the table is empty, every time. I am using a SQL Server database file (.mdf).

    Read the article

  • Deep Zoom in Ajax - Possible? Any examples out there?

    - by Phil
    I have an idea to implement a deep zoom type interface hosted in a browser for sports training data (speed, distance, heart rate etc.) However, rather than images, I actually want to zoom into a hierarchy of information. For example, the initial display would contain a grid of years - hovering over 2008, for example, and spinning the mouse wheel (or clicking) will zoom into that year, but during the zoom I want 2008 to fade out and be replaced with a calendar of months. Again, zoom into a month and the months are replaced with that month's calendar; zoom into a day and you finally see a chart with the training data plotted on it. All the time, only dates with actual data would be highlighted in some fashion. My question is whether this would even be possible and whether anyone has seen examples of this already. I'm imagining that most of the time the next level of information could be cached in the browser (in fact, because this is calendar-based, I can calculate most of that and cache the dates to be highlighted.) I could also zoom into an empty chart whilst an Ajax thread is fetching the data to display. I've never tried anything like this before and I'm especially interested in whether DHTML would be capable of this sort of zoom (I suspect not and I would have to resort to Silverlight) and whether the Ajax execution would be uninterrupted whilst the browser rendering thread is kept busy zooming.

    Read the article

  • How to display objects with dynamic fields in wpf data grid?

    - by Oliver Hanappi
    Hi! I want to display and edit some objects in a WPF data grid and I'm looking for a good way to do so. All objects I want to display have the same fields, but the fields can differ from one execution to the next. Here is a piece of the interface to illustrate what I mean: public interface IMyObject { IEnumerable<string> GetFieldNames(); IEnumerable<Type> GetFieldTypes(); object GetField(string name); void SetField(string name, object value); } How can I generate a data grid which displays this kind of object? I thought of XAML generation to define the columns, but I'm still facing the problem of accessing the fields. I think I could realize this with value converters; another option would be to dynamically create a type which exposes the dynamic fields as properties. Are there any other ways, and which should I favor? I'm keen on hearing your opinions. Best Regards, Oliver Hanappi
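
    A sketch of the value-converter route: read-only columns generated in code from GetFieldNames(), with each cell looked up through GetField(name) via ConverterParameter. It is display-only (editing would need a ConvertBack or the dynamically generated type mentioned above) and assumes the IMyObject interface shown here:

        using System;
        using System.Globalization;
        using System.Windows.Controls;
        using System.Windows.Data;

        public class FieldConverter : IValueConverter
        {
            public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
            {
                return ((IMyObject)value).GetField((string)parameter);
            }

            public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
            {
                throw new NotSupportedException();   // display-only sketch
            }
        }

        // In the window's code-behind, once the shape of the objects is known:
        void BuildColumns(DataGrid grid, IMyObject sample)
        {
            grid.AutoGenerateColumns = false;
            var converter = new FieldConverter();
            foreach (string name in sample.GetFieldNames())
            {
                var binding = new Binding(".") { Converter = converter, ConverterParameter = name };
                grid.Columns.Add(new DataGridTextColumn { Header = name, Binding = binding, IsReadOnly = true });
            }
        }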

    Read the article

  • How to upload files to server using JSP/Servlet?

    - by Thang Pham
    How can I upload files to server using JSP/Servlet? I tried this: <form action="upload" method="post"> <input type="text" name="description" /> <input type="file" name="file" /> <input type="submit" /> </form> However, I only get the file name, not the file content. When I add enctype="multipart/form-data" to the <form>, then request.getParameter() returns null. During research I stumbled upon Apache Common FileUpload. I tried this: FileItemFactory factory = new DiskFileItemFactory(); ServletFileUpload upload = new ServletFileUpload(factory); List items = upload.parseRequest(request); // This line is where it died. Unfortunately, the servlet threw an exception without a clear message and cause. Here is the stacktrace: SEVERE: Servlet.service() for servlet UploadServlet threw exception javax.servlet.ServletException: Servlet execution threw an exception at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:313) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) at java.lang.Thread.run(Thread.java:637)
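
    For reference, a sketch of the servlet side with Commons FileUpload, assuming commons-fileupload and commons-io are on the classpath and the form declares enctype="multipart/form-data" (the real cause behind the stack trace above would still need the nested exception to diagnose):

        // Needs org.apache.commons.fileupload.* and org.apache.commons.io.FilenameUtils.
        protected void doPost(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            try {
                FileItemFactory factory = new DiskFileItemFactory();
                ServletFileUpload upload = new ServletFileUpload(factory);
                List<FileItem> items = upload.parseRequest(request);
                for (FileItem item : items) {
                    if (item.isFormField()) {
                        String description = item.getString();              // the text field
                    } else {
                        String fileName = FilenameUtils.getName(item.getName());
                        item.write(new File("/path/to/uploads", fileName)); // hypothetical target dir
                    }
                }
            } catch (FileUploadException e) {
                throw new ServletException("Cannot parse multipart request.", e);
            } catch (Exception e) {
                throw new ServletException("Cannot save uploaded file.", e);
            }
        }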

    Read the article

  • get value from css using document.getElementById().style.height javascript

    - by Jamex
    Hi, please offer insight into this mystery. I am trying to get the height value from a div box by var high = document.getElementById("hintdiv").style.height; alert(high); I can get this value just fine if the attribute is contained within the div tag, but it returns a blank value if the attribute is defined in the CSS section. This is fine, it shows 100px as the value. The value can be accessed. <div id="hintdiv" style="height:100px; display: none;"> . . var high = document.getElementById("hintdiv").style.height; alert(high); This is not fine, it shows an empty alert screen. The value is practically 0. #hintdiv { height:100px; display: none; } <div id="hintdiv"> . . var high = document.getElementById("hintdiv").style.height; alert(high); But I have no problem accessing/changing the "display:none" attribute whether it is in the tag or in the CSS section. The div box displays correctly with both attribute definition methods (inside the tag or in the CSS section). I also tried to access the value with other variations, but no luck: document.getElementById("hintdiv").style.height.value ----> undefined document.getElementById("hintdiv").height ----> undefined document.getElementById("hintdiv").height.value ----> error, no execution Any solution? TIA.
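
    element.style only mirrors the inline style attribute; values that come from a stylesheet are reachable through the computed style instead. A sketch with the old-IE fallback (worth verifying on the target browsers, since the div is display:none):

        function getHeight(id) {
            var el = document.getElementById(id);
            if (window.getComputedStyle) {                    // standards browsers
                return window.getComputedStyle(el, null).height;
            }
            return el.currentStyle.height;                    // old IE fallback
        }

        alert(getHeight("hintdiv"));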

    Read the article

  • Javascript and rendering pauses and stays paused on scroll in the android browser

    - by user357303
    Hi. I've found some weird behaviour related to scrolling, rendering and JavaScript. How to make it happen: on any webpage that is long enough to scroll on, start to scroll pretty fast (fling the page), then release the touch. Now, while the page is still scrolling because of the momentum, tap the screen to stop the scroll. This makes the browser enter a weird mode. On the Nexus One it behaves like this: the updating of what's shown on the screen stops; you can still click on links and they go to where they are supposed to, but what's shown on the screen stays the same. If you then scroll the screen a bit, the update of the screen kicks in again and what you were supposed to see all the time is shown. On all phones with HTC Sense I've tried (Hero, Desire, Legend) this happens: the updating of the screen is stopped just like on the Nexus One, but the execution of any JavaScript is stopped as well. If you click on a link that takes you to another page, however, things return to normal again. The way I tested this was I created a page like this: http://pastebin.ca/1881620 The changeColor function simply changed the background color of 'container' to a few different colors. So before the error, what happens is that when you click any link the color changes. After the error this happens: Nexus One: when you click on the links nothing happens (except the "orange link selected rounded corner box thing" is shown as if the link is clicked). Then when you scroll a bit, you can see the color has changed (an equal number of times to the number of times I clicked the link). On Sense: the links take me to google.com. Has anyone else noticed this problem? Is there any way to work around it? Thanks.

    Read the article

  • Sourcing a shell script, while running with sudo

    - by WishCow
    I would like to write a shell script that sets up a Mercurial repository, and allow all users in the group "developers" to execute this script. The script is owned by the user "hg", and works fine when run. The problem comes when I try to run it as another user, using sudo: the execution halts with a "permission denied" error when it tries to source another file. The script file in question: create_repo.sh #!/bin/bash source colors.sh REPOROOT="/srv/repository/mercurial/" ... rest of the script .... Permissions of create_repo.sh, and colors.sh: -rwxr--r-- 1 hg hg 551 2011-01-07 10:20 colors.sh -rwxr--r-- 1 hg hg 1137 2011-01-07 11:08 create_repo.sh Sudoers setup: %developer ALL = (hg) NOPASSWD: /home/hg/scripts/create_repo.sh What I'm trying to run: user@nebu:~$ id uid=1000(user) gid=1000(user) groups=4(adm),20(dialout),24(cdrom),46(plugdev),105(lpadmin),113(sambashare),116(admin),1000(user),1001(developer) user@nebu:~$ sudo -l Matching Defaults entries for user on this host: env_reset User user may run the following commands on this host: (ALL) ALL (hg) NOPASSWD: /home/hg/scripts/create_repo.sh user@nebu:~$ sudo -u hg /home/hg/scripts/create_repo.sh /home/hg/scripts/create_repo.sh: line 3: colors.sh: Permission denied So the script is executed, but halts when it tries to include the other script. I have also tried using: user@nebu:~$ sudo -u hg /bin/bash /home/hg/scripts/create_repo.sh Which gives the same result. What is the correct way to include another shell script, if the script may be run as a different user, through sudo?
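
    One frequent cause (only an assumption here) is that source colors.sh is resolved against the caller's current directory and PATH rather than the script's own directory, so under sudo it can land on a file the target user cannot read. A sketch that sources the helper by absolute path:

        #!/bin/bash
        # Resolve the directory this script lives in, then source relative to it.
        SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
        source "$SCRIPT_DIR/colors.sh"

        REPOROOT="/srv/repository/mercurial/"
        # ... rest of the script ...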

    Read the article

  • Beginner Question: To extract a large subset of a table from MySQL, how do indexing and the order of tables matter?

    - by chongman
    Sorry if this is too simple, but thanks in advance for helping. This is for MySQL but might be relevant for other RDBMSs. tblA has 5 columns: colA, colB, colC, mydata, A_id. It has about 10^9 records, with 10^3 distinct values for colA, colB, colC. tblB has 3 columns: colA, colB, B_id. It has about 10^4 records. I want all the records from tblA (except the A_id) that have a match in tblB. In other words, I want to use tblB to describe the subset that I want to extract and then extract those records from tblA. Namely: SELECT a.colA, a.colB, a.colC, a.mydata FROM tblA as a INNER JOIN tblB as b ON a.colA=b.colA AND a.colB=b.colB; It's taking a really long time (more than an hour) on a newish computer (4GB, Core2Quad, Ubuntu), and I just want to check my understanding of the following optimization steps. ** Suppose this is the only query I will ever run on these tables. So ignore the need to run other queries. Now my questions: 1) What indexes should I create to optimize this query? I think I just need a multiple-column index on (colA, colB) for both tables. I don't think I need separate indexes for colA and colB. Another Stack Overflow article (that I can't find) mentioned that adding new indexes is slower when there are existing indexes, so that might be a reason to use the multiple-column index. 2) Is INNER JOIN correct? I just want results where a match is found. 3) Is it faster if I join (tblA to tblB) or the other way around, (tblB to tblA)? This previous answer says that the optimizer should take care of that. 4) Does the order of the part after ON matter? This previous answer says that the optimizer also takes care of the execution order.
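
    A sketch of the composite indexes described in (1), plus an EXPLAIN to confirm they are actually used (assuming they do not already exist):

        CREATE INDEX idx_tblA_colA_colB ON tblA (colA, colB);
        CREATE INDEX idx_tblB_colA_colB ON tblB (colA, colB);

        EXPLAIN
        SELECT a.colA, a.colB, a.colC, a.mydata
        FROM tblA AS a
        INNER JOIN tblB AS b
            ON a.colA = b.colA AND a.colB = b.colB;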

    Read the article

  • How to Resolve a Transformation Service with BRE that occurs after an Orchestration in an Itinerary?

    - by Maxime Labelle
    In trying to implement simple integration patterns with the BizTalk ESB Toolkit 2.0, I'm facing a problem trying to resolve a Transformation Itinerary Service that occurs after an Orchestration. I'm using the BRE Resolver to execute rules that need to inspect the Context Message Type property to determine the appropriate map to use. However, once the message reaches the step in the Itinerary associated with the Transformation Service, the map fails to execute. From careful investigation, it appears that the message type is not supplied to the "Resolution" object that is used internally by the BRE resolver. Indeed, since the message leaving the preceding Orchestration is typed System.Xml.XmlDocument, the type of the message is "demoted" from the context. By tracking rules engine execution, I can observe that the type of the message is indeed lost when reaching the BRE resolver. The type of the message is empty, whereas the strong type of the document is Microsoft.XLANGs.BaseTypes.Any. The Orchestration service that I use is taken straight from the samples that ship with the ESB Toolkit 2.0. Is there a way to perform Context-Based BRE resolution after an Orchestration in an Itinerary?

    Read the article

  • SQL Server Long Query

    - by thormj
    Ok... I don't understand why this query is taking so long (MSSQL Server 2005): [Typical output 3K rows, 5.5 minute execution time] SELECT dbo.Point.PointDriverID, dbo.Point.AssetID, dbo.Point.PointID, dbo.Point.PointTypeID, dbo.Point.PointName, dbo.Point.ForeignID, dbo.Pointtype.TrendInterval, coalesce(dbo.Point.trendpts,5) AS TrendPts, LastTimeStamp = PointDTTM, LastValue=PointValue, Timezone FROM dbo.Point LEFT JOIN dbo.PointType ON dbo.PointType.PointTypeID = dbo.Point.PointTypeID LEFT JOIN dbo.PointData ON dbo.Point.PointID = dbo.PointData.PointID AND PointDTTM = (SELECT Max(PointDTTM) FROM dbo.PointData WHERE PointData.PointID = Point.PointID) LEFT JOIN dbo.SiteAsset ON dbo.SiteAsset.AssetID = dbo.Point.AssetID LEFT JOIN dbo.Site ON dbo.Site.SiteID = dbo.SiteAsset.SiteID WHERE onlinetrended =1 and WantTrend=1 PointData is the biggun, but I thought its definition should allow me to pick up what I want easily enough: CREATE TABLE [dbo].[PointData]( [PointID] [int] NOT NULL, [PointDTTM] [datetime] NOT NULL, [PointValue] [real] NULL, [DataQuality] [tinyint] NULL, CONSTRAINT [PK_PointData_1] PRIMARY KEY CLUSTERED ( [PointID] ASC, [PointDTTM] ASC ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO CREATE NONCLUSTERED INDEX [IX_PointDataDesc] ON [dbo].[PointData] ( [PointID] ASC, [PointDTTM] DESC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] GO PointData is 550M rows, and Point (source of PointID) is only 28K rows. I tried making an Indexed View, but I can't figure out how to get the Last Timestamp/Value out of it in a compatible way (no Max, no subquery, no CTE). This runs twice an hour, and after it runs I put more data into those 3K PointID's that I selected. I thought about creating LastTime/LastValue tables directly into Point, but that seems like the wrong approach. Am I missing something, or should I rebuild something? (I'm also the DBA, but I know very little about A'ing a DB!)
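
    For "latest row per key" lookups, OUTER APPLY with TOP 1 is one commonly tried alternative to the correlated MAX; a sketch covering just the Point/PointData part (whether it is actually faster here, and whether the two flags live on Point, are assumptions):

        SELECT P.PointID,
               PD.PointDTTM  AS LastTimeStamp,
               PD.PointValue AS LastValue
        FROM dbo.Point P
        OUTER APPLY (
            SELECT TOP 1 D.PointDTTM, D.PointValue
            FROM dbo.PointData D
            WHERE D.PointID = P.PointID
            ORDER BY D.PointDTTM DESC          -- can seek on IX_PointDataDesc
        ) PD
        WHERE P.onlinetrended = 1 AND P.WantTrend = 1;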

    Read the article

  • SQL Server service broker reporting as off when I have written a query to turn it on

    - by dotnetdev
    I have made a small ASP.NET website. It uses SqlCacheDependency, but I get this error: The SQL Server Service Broker for the current database is not enabled, and as a result query notifications are not supported. Please enable the Service Broker for this database if you wish to use notifications. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.InvalidOperationException: The SQL Server Service Broker for the current database is not enabled, and as a result query notifications are not supported. Please enable the Service Broker for this database if you wish to use notifications. Source Error: Line 12: System.Data.SqlClient.SqlDependency.Start(connString); This is the erroneous line in my global.asax. However, in SQL Server (2005), I enabled Service Broker like so (I connect to and run the SQL Server service when I debug my site): ALTER DATABASE mynewdatabase SET ENABLE_BROKER with rollback immediate - and this was successful. What am I missing? I am trying to use SQL cache dependency and have followed all the procedures. Thanks
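
    A quick sketch for double-checking what the server itself reports, and for making sure the ALTER ran against the same database the connection string attaches (a mismatch between copies of an .mdf file is a common cause, though that is only a guess here):

        -- Verify the broker flag for the database the site actually connects to.
        SELECT name, is_broker_enabled
        FROM sys.databases
        WHERE name = 'mynewdatabase';

        -- If it reports 0, enable it while no other connections hold the database:
        ALTER DATABASE mynewdatabase SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;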

    Read the article
