Search Results

Search found 3071 results on 123 pages for 'executing'.


  • PHP jQuery Long Polling Chat Application

    - by Tejas Jadhav
    I've made a simple PHP jQuery chat application with short polling (AJAX refresh): every 2-3 seconds it asks for new messages. But I read that long polling is a better approach for chat applications, so I went through some long polling scripts and came up with this:

    JavaScript:

        $("#submit").click(function(){
            $.ajax({
                url: 'chat-handler.php',
                dataType: 'json',
                data: {action : 'read', message : 'message'}
            });
        });

        var getNewMessage = function() {
            $.ajax({
                url: 'chat-handler.php',
                dataType: 'json',
                data: {action : 'read', message : 'message'},
                success: function(data){   // the callback needs a key; success: appears to be what was intended
                    alert(data);
                }
            });
            getNewMessage();
        }

        $(document).ready(getNewMessage);

    PHP:

        <?php
        $time = time();
        while ((time() - $time) < 25) {
            $data = $db->getNewMessage();
            if (!empty($data)) {
                echo json_encode($data);
                break;
            }
            usleep(1000000); // 1 second
        }
        ?>

    The problem is that once getNewMessage() starts, it keeps executing until it gets some response from chat-handler.php, and it calls itself recursively. If someone wants to send a message in the meantime, the $("#submit").click() handler never runs, because getNewMessage() is still executing. So is there any workaround?
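
    A common workaround, offered here only as a sketch and not from the original post: re-issue the long-poll request from the AJAX complete callback instead of calling getNewMessage() synchronously, and, if PHP sessions are in use, release the session lock in chat-handler.php so a concurrent "send" request isn't blocked by the long-running poll.

        var getNewMessage = function() {
            $.ajax({
                url: 'chat-handler.php',
                dataType: 'json',
                data: {action: 'read', message: 'message'},
                success: function (data) { alert(data); },
                complete: function () {
                    // schedule the next poll only after this one finishes
                    setTimeout(getNewMessage, 0);
                }
            });
        };

        // in chat-handler.php, before entering the polling loop (only if sessions are used):
        // if (session_id()) { session_write_close(); }  // don't hold the session lock while waiting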

    Read the article

  • PageMethod with Custom Handler

    - by Mister Cook
    I have a page-based web method which works fine, using [WebMethod], i.e.

        [WebMethod]
        public static void DoSomethingOnTheServer() { }

    I also have a custom page handler to make SEO-friendly URLs possible, e.g.

        http://localhost/MySite/A_nice_url.ext  =  http://localhost/MySite/page.aspx?id=1

    I am using Server.Transfer in my ProcessRequest handler to achieve this. This all works fine. Also, my page method works fine when the user requests a URL such as:

        http://localhost/MySite/page.aspx?id=1

    However, when the user requests the custom handled URL, i.e.

        http://localhost/MySite/A_nice_url.ext

    the client reports that the PageMethod has completed successfully, but it has not been called at all. I'm guessing it has something to do with my custom handler, so I have modified it to include PathInfo:

        public void ProcessRequest(HttpContext context)
        {
            // ... (code to determine id) ...

            // Transfer to the required page
            string actualPath = "~/page.aspx" + context.Request.PathInfo + "?id=" + determinedId;
            context.Server.Transfer(actualPath);
        }

    So now if a PageMethod is called, it will result in:

        http://localhost/MySite/page.aspx/DoSomethingOnTheServer?id=1

    I'm wondering if this is the correct syntax for calling a PageMethod. When I try it, Server.Transfer reports:

        Error executing child request for /MySite/page.aspx/DoSomethingOnTheServer

    I've experimented with HttpContext.RewritePath but am not sure how to actually make it perform the request. This does not seem to work - any ideas?

    Read the article

  • Optimizing MySQL update query

    - by Jernej Jerin
    This is currently my MySQL UPDATE query, which is called from a program written in Java:

        String query = "UPDATE maxday SET DatePressureREL = (SELECT Date FROM ws3600 WHERE PressureREL = (SELECT MAX" +
                       "(PressureREL) FROM ws3600 WHERE Date >= '" + Date + "') AND Date >= '" + Date + "' ORDER BY Date DESC LIMIT 1), " +
                       "PressureREL = (SELECT PressureREL FROM ws3600 WHERE PressureREL = (SELECT MAX(PressureREL) FROM ws3600 " +
                       "WHERE Date >= '" + Date + "') AND Date >= '" + Date + "' ORDER BY Date DESC LIMIT 1), ...";
        try {
            s.execute(query);
        } catch (SQLException e) {
            System.out.println("SQL error");
        } catch (Exception e) {
            e.printStackTrace();
        }

    Let me explain first what it does. I have two tables. The first is ws3600, which holds the columns (Date, PressureREL, TemperatureOUT, Dewpoint, ...). The second table, called maxday, holds columns like DatePressureREL, PressureREL, DateTemperatureOUT, TemperatureOUT, ...

    As you can see from the example, I update each column. The question is: is there a faster way? I am asking because I call MAX twice, first to find the Date for that value and second to find the actual value. I know that I could write it like this:

        SELECT Date, PressureREL FROM ws3600
        WHERE PressureREL = (SELECT MAX(PressureREL) FROM ws3600 WHERE Date >= '" + Date + "')
          AND Date >= '" + Date + "'
        ORDER BY Date DESC LIMIT 1

    That way I get the Date of the max and the max value at the same time, and then update the maxday table with those values. The problem with this solution is that I have to execute many queries, which as I understand takes a lot more time than executing one long MySQL query because of the overhead of sending each query to the server. If there is no better way, which of these two solutions should I choose: the first, which only takes one query but is very unoptimized, or the second, which is better in terms of optimization but needs a lot more queries, which probably means the performance gain is lost to the overhead of sending each query to the server?
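
    A sketch of a possible middle ground, assuming the column names above and a reasonably recent MySQL 5.x: a multi-table UPDATE with a derived table finds the max and its date in one pass, and a JDBC PreparedStatement placeholder (the ?) avoids concatenating the date into the SQL. The remaining column pairs (TemperatureOUT, Dewpoint, ...) would each need their own derived table added to the join.

        UPDATE maxday m
        JOIN (SELECT Date, PressureREL
              FROM ws3600
              WHERE Date >= ?
              ORDER BY PressureREL DESC, Date DESC
              LIMIT 1) w
        SET m.DatePressureREL = w.Date,
            m.PressureREL     = w.PressureREL;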

    Read the article

  • How to set SQL_BIG_SELECTS = 1 from VB(legacy ASP) with ADODB environment?

    - by conecon
    I encountered the error "The SELECT would examine more than MAX_JOIN_SIZE rows; check your WHERE and use SET SQL_BIG_SELECTS=1 or SET SQL_MAX_JOIN_SIZE=# if the SELECT is okay" with my ASP code. The ASP code has a server-side ADODB connection to MySQL, and the connection does not seem to be able to execute a multi-statement query. How can I implement SQL_BIG_SELECTS = 1 in my code?

        Set obj_db = Server.CreateObject("ADODB.Connection")
        Session("ConnectionString") = "dsn=dsn1016189_mysql;uid=apns;pwd=mypassword;DATABASE=mydb;APP=ASP Script;STMT=SET CHARACTER SET SJIS"
        obj_db.Open Session("ConnectionString")
        Set obj_ret = Server.CreateObject("ADODB.Recordset")
        obj_ret.CursorLocation = 3

    and the SQL being executed:

        SQL_BIG_SELECTS = 1;
        SELECT pu.login_id, pu.p_login_id, pu.first_name, pu.last_name, pu.sex, pu.is_admin,
               pu.attendance, pu.invited, pu.reason, qaa1.answer AS qaa1_answer,
               COUNT(pu2.p_login_id) AS companion
        FROM party_user pu
        LEFT OUTER JOIN party_user pu2 ON pu2.p_login_id = pu.login_id
        LEFT OUTER JOIN qa_answer qaa1 ON qaa1.login_id = pu.login_id
            AND qaa1.party_id = pu.party_id AND qaa1.sort_num = '1'
        WHERE pu.party_id = '92' AND pu.p_login_id = ''
        GROUP BY pu.login_id, pu.p_login_id, pu.first_name, pu.last_name, pu.sex, pu.is_admin,
                 pu.attendance, pu.reason, qaa1.answer, pu.invited
        ORDER BY pu.login_id ASC;

    I can't execute a multi-statement query, and the query above fails with: "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'SELECT pu.login_id, pu.p_login_id, pu.first_name, pu.last_name, pu.sex, pu.is_ad' at line 1"
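
    A sketch of one way around this, offered as a suggestion rather than something from the post: since the connection runs one statement per call, issue the SET as its own Execute on the same connection (the session variable sticks for that connection) before opening the recordset. Note the first statement above is also missing the SET keyword.

        obj_db.Open Session("ConnectionString")
        obj_db.Execute "SET SQL_BIG_SELECTS=1"   ' separate round trip, same connection/session
        strSQL = "SELECT pu.login_id, pu.p_login_id, ... ORDER BY pu.login_id ASC"   ' the full SELECT from above
        obj_ret.Open strSQL, obj_db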

    Read the article

  • Self-referencing tables in Linq2Sql

    - by J-Man
    Hi,

    I've seen a lot of questions on self-referencing tables in Linq2Sql and how to eagerly load all child records for a particular root object. I've implemented a temporary solution by accessing all underlying properties, but you can see that this doesn't do the performance any good. The thing is, though, that all records are correlated with each other using a correlation GUID. Example below:

        RootElement   - Id: 1 - ParentId: null - CorrelationId: 4D68E512-4B55-44f4-BA5A-174B630A03DD
        ChildElement1 - Id: 2 - ParentId: 1    - CorrelationId: 4D68E512-4B55-44f4-BA5A-174B630A03DD
        ChildElement2 - Id: 3 - ParentId: 2    - CorrelationId: 4D68E512-4B55-44f4-BA5A-174B630A03DD
        ChildElement1 - Id: 4 - ParentId: 2    - CorrelationId: 4D68E512-4B55-44f4-BA5A-174B630A03DD

    In my case, I do have access to the correlationId, so I can retrieve all of my records by performing the following query:

        from element in db.Elements
        where element.CorrelationId == '4D68E512-4B55-44f4-BA5A-174B630A03DD'
        select element;

    But, of course, I want these elements associated with each other by executing this query:

        from element in db.Elements
        where element.CorrelationId == '4D68E512-4B55-44f4-BA5A-174B630A03DD'
              && element.ParentId == null
        select element;

    My question is: is it possible to use the results of the first query as some sort of 'caching mechanism' for the query where I get the root element?

    Thanks for the input. J.
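
    A sketch of one way to do this in memory - the entity shape (an int Id and a nullable int ParentId, and a correlationId variable holding the Guid) is my assumption: pull everything for the correlation id in a single round trip, then build the parent/child associations with a lookup instead of letting Linq2Sql lazy-load each level.

        var all = (from element in db.Elements
                   where element.CorrelationId == correlationId
                   select element).ToList();          // one database round trip

        var byParent = all.ToLookup(e => e.ParentId); // group children by ParentId
        var roots = byParent[null].ToList();          // ParentId == null -> root elements

        // the children of any element are then byParent[element.Id], resolved entirely in memory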

    Read the article

  • How to make COM ActiveX object work in IE 64 bit?

    - by Kurtevich
    Hi! I have a COM object embedded in an ASP.NET page using <object classid="clsid:XXX...">. It works in 32-bit IE, but does not work in 64-bit IE - I can't access its functions. There are no error messages and no event logs where I can get some information. The DLL is in C#, includes a COM-visible class, is compiled for Any CPU (though I also tried x86), and is registered during client installation by executing regasm. This creates the registry keys, and everything works fine except for 64-bit IE. I searched the internet for the issue or at least some guidelines and didn't find anything. I received an answer on another forum, something about _MERGE_PROXYSTUB (I guess it's a preprocessor definition?) and the ProxyStubClsid32 registry key, but it wasn't very detailed. Well, I searched again, didn't find much, and experimented: I rebuilt with _MERGE_PROXYSTUB defined and created ProxyStubClsid32 keys everywhere, but with no result. What are at least some possible solutions or points to look at? Maybe there is at least a way to get logs about why 64-bit IE can't access it?
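
    One thing worth checking, as an assumption rather than a confirmed fix: 64-bit IE reads the 64-bit view of the registry, so an Any CPU assembly registered only with the 32-bit regasm is invisible to it. A sketch of registering with both (the DLL name is a placeholder, and the version folder should match the runtime the assembly targets):

        %windir%\Microsoft.NET\Framework\v2.0.50727\regasm.exe MyComObject.dll /codebase
        %windir%\Microsoft.NET\Framework64\v2.0.50727\regasm.exe MyComObject.dll /codebase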

    Read the article

  • Modifying UINavigation Bar Buttons in SubViews

    - by james
    I'm having trouble trying to modify the navigation bar in the subview portion of my application. self.navigationItem.rightBarButtonItem = [[[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemDone target:self action:@selector(add_Clicked:)] autorelease]; I have no issues modifying the navigation bar in any of my UIViewControllers classes. The simplified application class outline is as such: AppDelegate - UIViewControllerA (has a left and a right navigationBarButton) - Subview is displayed when a SegmentControl is selected. (Within the subview, I'm trying to modify the right NavigationBarButton that is displayed) [self.view addSubview:newControllerName.view]; Methods I have attempted: Trying to set self.navigationItem.rightBarButtonItem within my subview to a new UIBarButtonItem. Creating a pointer to UIViewControllerA within my AppDelegate. The UIViewControllerA contains a function setNavButton I wrote to set the rightBarButtonItem to a button. Then I am referencing the AppDelegate's reference to UIViewControllerA and attempting to call setNavButton. I included a NSLog call to see if that function is being called and it is executing but the NavigationBar isn't being modified. I'm trying to avoid having to push a UIViewController after the SegmentControl is clicked in UIViewControllerA so that I can simulate the SegmentControl as tabs. I'm not getting any errors during compile or run time. Anyone have any ideas?
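
    One possibility, offered as my own reading of the setup: only view controllers that sit on the UINavigationController's stack get their navigationItem consulted, so a controller whose view is added with addSubview: never has its navigationItem shown. A sketch of setting the button from the controller that is on the stack (UIViewControllerA), e.g. when the segment changes - subController is a hypothetical reference to the subview's controller:

        UIBarButtonItem *done = [[[UIBarButtonItem alloc]
            initWithBarButtonSystemItem:UIBarButtonSystemItemDone
                                 target:subController
                                 action:@selector(add_Clicked:)] autorelease];
        self.navigationItem.rightBarButtonItem = done;   // 'self' is UIViewControllerA, the controller on the nav stack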

    Read the article

  • eventmachine and external scripts via backticks

    - by Maciek
    I have a small HTTP server script I've written using eventmachine which needs to call external scripts/commands and does so via backticks (``). When serving up requests which don't run backticked code, everything is fine, however, as soon as my EM code executes any backticked external script, it stops serving requests and stops executing in general. I noticed eventmachine seems to be sensitive to sub-processes and/or threads, and appears to have the popen method for this purpose, but EM's source warns that this method doesn't work under Windows. Many of the machines running this script are running Windows, so I can't use popen. Am I out of luck here? Is there a safe way to run an external command from an eventmachine script under Windows? Is there any way I could fire off some commands to be run externally without blocking EM's execution? edit: the culprit that seems to be screwing up EM the most is my usage of the Windows start command, as in: start java myclass. The reason I'm using start is because I want those external scripts to start running and keep running after the EM request is served
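
    A sketch of one direction to try, as an assumption rather than a known-good fix for Windows: EM.defer runs a blocking operation on EventMachine's Ruby thread pool and is not documented with the same Windows caveat as popen, so the backtick call can run off the reactor thread while requests keep being served.

        operation = proc do
          `start java myclass`            # blocking external command runs on a pool thread
        end
        callback = proc do |output|
          puts "external command finished: #{output}"
        end
        EM.defer(operation, callback)     # reactor stays responsive while the command runs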

    Read the article

  • stack overflow on XMLListCollection collectionEvent

    - by reidLinden
    I'm working on a Flex 3 project, and I'm using a pair of XMLListCollection(s) to manage a combobox and a data grid. The combobox piece is working perfectly. The XMLListCollection for this is static. The user picks an item, and, on "change", it fires off an addItem() to the second collection. The second collection's datagrid then displays the updated list, and all is well. The datagrid, however, is editable. A further complication is that I have another event handler bound to the second XMLLIstCollection's "change" event, and in that handler, I do make additional changes to the second list. This essentially causes an infinite loop (a stack overflow :D ), of the second lists "change" handler. I'm not really sure how to handle this. Searching has brought up an idea or two regarding AutoUpdate functionality, but I wasn't able to get much out of them. In particular, the behavior persists, executing the 'updates' as soon as I re-enable, so I imagine I may be doing it wrong. I want the update to run, in general, just not DURING that code block. Thanks for your help!
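
    A sketch of the usual re-entrancy guard for this situation (the names are mine, not from the post): set a flag before making your own changes to the second collection and ignore the COLLECTION_CHANGE events that arrive while it is set.

        private var applyingUpdates:Boolean = false;

        private function onSecondCollectionChange(event:CollectionEvent):void {
            if (applyingUpdates) return;        // ignore the events our own edits generate
            applyingUpdates = true;
            try {
                // ... make the additional changes to the second XMLListCollection here ...
            } finally {
                applyingUpdates = false;
            }
        }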

    Read the article

  • How do I refactor this code by using Action<T> or Func<T> delegates

    - by user330612
    I have a sample program which needs to execute 3 methods in a particular order, and after executing each method it should do error handling. I did this in a normal fashion, without using delegates, like this:

        class Program
        {
            public static void Main()
            {
                MyTest();
            }

            private static bool MyTest()
            {
                bool result = true;
                int m = 2;
                int temp = 0;
                try {
                    temp = Function1(m);
                } catch (Exception e) {
                    Console.WriteLine("Caught exception for function1" + e.Message);
                    result = false;
                }
                try {
                    Function2(temp);
                } catch (Exception e) {
                    Console.WriteLine("Caught exception for function2" + e.Message);
                    result = false;
                }
                try {
                    Function3(temp);
                } catch (Exception e) {
                    Console.WriteLine("Caught exception for function3" + e.Message);
                    result = false;
                }
                return result;
            }

            public static int Function1(int x)
            {
                Console.WriteLine("Sum is calculated");
                return x + x;
            }

            public static int Function2(int x)
            {
                Console.WriteLine("Difference is calculated ");
                return (x - x);
            }

            public static int Function3(int x)
            {
                return x * x;
            }
        }

    As you can see, this code looks ugly with so many try/catch blocks that are all doing the same thing, so I decided to use delegates to refactor it so that the try/catch can all be moved into one method and it looks neat. I was looking at some examples online and couldn't figure out whether I should use Action or Func delegates for this. Both look similar, but I'm unable to get a clear idea of how to implement this. Any help is greatly appreciated. I'm using .NET 4.0, so I'm also allowed to use anonymous methods and lambda expressions for this. Thanks
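
    A sketch of one possible refactoring (the helper name is mine): since the return values are either stored via a closure or ignored, a shared helper taking an Action covers all three calls; a Func<int> overload would work the same way if you wanted the helper to hand the value back.

        private static bool TryRun(string name, Action action)
        {
            try
            {
                action();
                return true;
            }
            catch (Exception e)
            {
                Console.WriteLine("Caught exception for " + name + ": " + e.Message);
                return false;
            }
        }

        private static bool MyTest()
        {
            int m = 2;
            int temp = 0;
            bool result = true;
            result &= TryRun("function1", () => { temp = Function1(m); });
            result &= TryRun("function2", () => Function2(temp));
            result &= TryRun("function3", () => Function3(temp));
            return result;
        }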

    Read the article

  • Dropping all user tables/sequences in Oracle

    - by Ambience
    As part of our build process and evolving database, I'm trying to create a script which will remove all of the tables and sequences for a user. I don't want to recreate the user, as this would require more permissions than allowed. My script creates a procedure to drop the tables/sequences, executes the procedure, and then drops the procedure. I'm executing the file from sqlplus:

    drop.sql:

        create or replace procedure drop_all_cdi_tables is
            cur integer;
        begin
            cur := dbms_sql.OPEN_CURSOR();
            for t in (select table_name from user_tables)
            loop
                execute immediate 'drop table ' || t.table_name || ' cascade constraints';
            end loop;
            dbms_sql.close_cursor(cur);

            cur := dbms_sql.OPEN_CURSOR();
            for t in (select sequence_name from user_sequences)
            loop
                execute immediate 'drop sequence ' || t.sequence_name;
            end loop;
            dbms_sql.close_cursor(cur);
        end;
        /
        execute drop_all_cdi_tables;
        /
        drop procedure drop_all_cdi_tables;
        /

    Unfortunately, dropping the procedure causes a problem. There seems to be a race condition, and the procedure is dropped before it executes. E.g.:

        SQL*Plus: Release 11.1.0.7.0 - Production on Tue Mar 30 18:45:42 2010
        Copyright (c) 1982, 2008, Oracle.  All rights reserved.

        Connected to:
        Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
        With the Partitioning, OLAP, Data Mining and Real Application Testing options

        Procedure created.

        PL/SQL procedure successfully completed.

        Procedure created.

        Procedure dropped.

        drop procedure drop_all_user_tables
        *
        ERROR at line 1:
        ORA-04043: object DROP_ALL_USER_TABLES does not exist

        SQL Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64
        With the Partitioning, OLAP, Data Mining and Real Application Testing options

    Any ideas on how to get this working?
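
    A sketch of one way to sidestep the problem, offered as a suggestion rather than the original approach: run the drops from an anonymous PL/SQL block, so there is nothing to create or drop afterwards. This also removes the stray "/" lines after the execute and drop statements, which make SQL*Plus re-run the previous buffer (hence the second "Procedure created." and the failed second drop in the log above).

        -- drop.sql, as a single anonymous block
        begin
            for t in (select table_name from user_tables) loop
                execute immediate 'drop table "' || t.table_name || '" cascade constraints';
            end loop;
            for s in (select sequence_name from user_sequences) loop
                execute immediate 'drop sequence "' || s.sequence_name || '"';
            end loop;
        end;
        /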

    Read the article

  • Ruby: what is the pitfall in this simple code excerpt that tests variable existence

    - by zipizap
    I'm starting with Ruby, and while making some test samples I've stumbled on an error in the code that I don't understand. The code is meant to test whether a variable finn is defined?(), and if it is defined, increment it. If it isn't defined, it will define it with the value 0 (zero).

    As the code threw an error, I started to decompose it into small pieces and run them, to better trace where the error was coming from. The code was run in IRB, irb 0.9.5(05/04/13), using ruby 1.9.1p378.

    First I certify that the variable finn is not yet defined, and all is ok:

        ?> finn
        NameError: undefined local variable or method `finn' for main:Object
            from (irb):134
            from /home/paulo/.rvm/rubies/ruby-1.9.1-p378/bin/irb:15:in `<main>'
        >>

    Then I certify that the following inline condition executes as expected, and all is ok:

        ?> ((defined?(finn)) ? (finn+1):(0))
        => 0

    And now comes the code that throws the error:

        ?> finn=((defined?(finn)) ? (finn+1):(0))
        NoMethodError: undefined method `+' for nil:NilClass
            from (irb):143
            from /home/paulo/.rvm/rubies/ruby-1.9.1-p378/bin/irb:15:in `<main>'

    I was expecting that the code would not throw any error, and that after executing it the variable finn would be defined with a first value of 0 (zero). But instead, the code throws the error, and finn gets defined but with a value of nil:

        >> finn
        => nil

    Where might the error come from? Why does the inline condition work alone, but not when used for the finn assignment? Any help appreciated :)
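
    For what it's worth, the behaviour follows from how Ruby parses assignments: a local variable comes into existence (with the value nil) as soon as the parser sees it on the left-hand side of an assignment, before the right-hand side is evaluated. So inside finn = ..., defined?(finn) already reports a local variable, and the expression ends up evaluating nil + 1. A small sketch of the usual workaround:

        # the assignment itself defines finn as nil, so test for nil instead of defined?
        finn = finn.nil? ? 0 : finn + 1   # first run  => 0
        finn = finn.nil? ? 0 : finn + 1   # second run => 1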

    Read the article

  • Learning Hibernate: too many connections

    - by stivlo
    I'm trying to learn Hibernate, so I wrote the simplest Person entity and tried to insert 2000 of them. I know I'm using deprecated methods; I will try to figure out what the new ones are later.

    First, here is the class Person:

        @Entity
        public class Person {
            private int id;
            private String name;

            @Id
            @GeneratedValue(strategy = GenerationType.TABLE, generator = "person")
            @TableGenerator(name = "person", table = "sequences", allocationSize = 1)
            public int getId() { return id; }
            public void setId(int id) { this.id = id; }

            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
        }

    Then I wrote a small App class that inserts 2000 entities in a loop:

        public class App {
            private static AnnotationConfiguration config;

            public static void insertPerson() {
                SessionFactory factory = config.buildSessionFactory();
                Session session = factory.getCurrentSession();
                session.beginTransaction();
                Person aPerson = new Person();
                aPerson.setName("John");
                session.save(aPerson);
                session.getTransaction().commit();
            }

            public static void main(String[] args) {
                config = new AnnotationConfiguration();
                config.addAnnotatedClass(Person.class);
                config.configure("hibernate.cfg.xml"); // is the default already
                new SchemaExport(config).create(true, true); // print and execute
                for (int i = 0; i < 2000; i++) {
                    insertPerson();
                }
            }
        }

    What I get after a while is:

        Exception in thread "main" org.hibernate.exception.JDBCConnectionException: Cannot open connection
        Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Too many connections

    Now I know that if I put the transaction outside the loop it would probably work, but mine was a test to see what happens when executing multiple transactions. And since there is only one open at a time, it should work. I tried to add session.close() after the commit, but I got:

        Exception in thread "main" org.hibernate.SessionException: Session was already closed

    So how do I solve the problem?
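
    A sketch of the likely fix, based on my reading of the code rather than anything stated in the post: buildSessionFactory() is called once per insert, and every SessionFactory keeps its own connection pool, so connections pile up until the server refuses new ones. Building the factory once keeps the one-transaction-per-insert test intact (getCurrentSession() closes the session automatically on commit, which is also why the explicit session.close() complained):

        private static SessionFactory factory;   // built once, shared by all inserts

        public static void insertPerson() {
            Session session = factory.getCurrentSession();
            session.beginTransaction();
            Person aPerson = new Person();
            aPerson.setName("John");
            session.save(aPerson);
            session.getTransaction().commit();   // also closes the current session
        }

        public static void main(String[] args) {
            config = new AnnotationConfiguration();
            config.addAnnotatedClass(Person.class);
            config.configure("hibernate.cfg.xml");
            new SchemaExport(config).create(true, true);
            factory = config.buildSessionFactory(); // one factory, one connection pool
            for (int i = 0; i < 2000; i++) {
                insertPerson();
            }
            factory.close();
        }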

    Read the article

  • Get an Arduino and Android phone to communicate over the web

    - by Saleem
    I am writing an Android application to communicate with my Arduino over the web. The Arduino is running a web server through an Ethernet shield. I am attaching my code, but I will explain it here so you will understand what I am trying to do.

    The Android sends an HTTP request in the format http://192.168.1.148/?Lights=1. The Arduino gets the request, executes the command (in this case turning on some lights) and then responds to the Android device by simply sending the string "Lights=On". The Android will then change the color of the button to notify the user that the command was executed successfully.

    The Arduino is getting the instruction, executing it and sending the response, but my button color is not changing. I know that the Android device is getting the string because I added a debug line to change the text on the button to the received response.

    The relevant code for the Android device is:

        ((Button) v).setText(sb.toString()); // This works and the button text changes to "Lights=On".

        // Test response and update button
        if (sb.toString() == "Lights=On") {
            v.getBackground().setColorFilter(0xFFFFFF00, PorterDuff.Mode.MULTIPLY);
            Drawable d = lightOff.getBackground();
            lightOff.invalidateDrawable(d);
            d.clearColorFilter();
        }

    The Arduino code is:

        if (s == "Lights") {
            switch (client.read()) {
                case '0':
                    digitalWrite(LightPin, 0);
                    client.print("Lights=Off");
                    // debug
                    Serial.println("Lights=Off");
                    break;
                case '1':
                    digitalWrite(LightPin, 1);
                    client.print("Lights=On");
                    Serial.println("Lights=On");
                    break;
            }
        }

    Please let me know if you need more of the code to answer this question.
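
    A likely culprit in the Android snippet, offered as an observation rather than a confirmed diagnosis: == compares object references in Java, so sb.toString() == "Lights=On" is false even when the characters match. A sketch of the comparison with equals(), trimming any stray whitespace or newline from the response:

        String response = sb.toString().trim();
        if ("Lights=On".equals(response)) {
            v.getBackground().setColorFilter(0xFFFFFF00, PorterDuff.Mode.MULTIPLY);
        }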

    Read the article

  • ETL, Esper or Drools?

    - by geoaxis
    Hello, The question environment relates to JavaEE, Spring I am developing a system which can start and stop arbitrary TCP (or other) listeners for incoming messages. There could be a need to authenticate these messages. These messages need to be parsed and stored in some other entities. These entities model which fields they store. So for example if I have property1 that can have two text fields FillLevel1 and FillLevel2, I could receive messages on TCP which have both fill levels specified in text as F1=100;F2=90 Later I could add another filed say FillLevel3 when I start receiving messages F1=xx;F2=xx;F3=xx. But this is a conscious decision on the part of system modeler. My question is what do you think is better to use for parsing and storing the message. ETL (using Pantaho, which is used in other system) where you store the raw message and use task executor to consume them one by one and store the transformed messages as per your rules. One could use Espr or Drools to do the same thing , storing rules and executing them with timer, but I am not sure how dynamic you could get with making rules (they have to be made by end user in a running system and preferably in most user friendly way, ie no scripts or code, only GUI) The end user should be capable of changing the parse rules. It is also possible that end user might want to change the archived data as well (for example in the above example if a new value of FillLevel is added, one would like to put a FillLevel=-99 in the previous values to make the data consistent). Please ask for explanations, I have the feeling that I need to revise this question a bit. Thanks

    Read the article

  • How to avoid chaotic ASP.NET web application deployment?

    - by emzero
    Ok, so here's the thing. I'm developing an existing (it started being an ASP classic app, so you can imagine :P) web application under ASP.NET 4.0 and SQLServer 2005. We are 4 developers using local instances of SQL Server 2005 Express, having the source-code and the Visual Studio database project This webapp has several "universes" (that's how we call it). Every universe has its own database (currently on the same server) but they all share the same schema (tables, sprocs, etc) and the same source/site code. So manually deploying is really annoying, because I have to deploy the source code and then run the sql scripts manually on each database. I know that manual deploying can cause problems, so I'm looking for a way of automating it. We've recently created a Visual Studio Database Project to manage the schema and generate the diff-schema scripts with different targets. I don't have idea how to put the pieces together I would like to: Have a way to make a "sync" deploy to a target server (thanksfully I have full RDC access to the servers so I can install things if required). With "sync" deploy I mean that I don't want to fully deploy the whole application, because it has lots of files and I just want to deploy those new or changed. Generate diff-sql update scripts for every database target and combine it to just 1 script. For this I should have some list of the databases names somewhere. Copy the site files and executing the generated sql script in an easy and automated way. I've read about MSBuild, MS WebDeploy, NAnt, etc. But I don't really know where to start and I really want to get rid of this manual deploy. If there is a better and easier way of doing it than what I enumerated, I'll be pleased to read your option. I know this is not a very specific question but I've googled a lot about it and it seems I cannot figure out how to do it. I've never used any automation tool to deploy. Any help will be really appreciated, Thank you all, Regards

    Read the article

  • BASH if conditions

    - by Daniil
    Hi, I asked a question before. The answer made sense, but I could never get it to work, and now I gotta get it working. But I cannot figure out BASH's if statements. What am I doing wrong below?

        START_TIME=9
        STOP_TIME=17
        HOUR=$((`date +"%k"`))

        if [[ "$HOUR" -ge "9" ]] && [[ "$HOUR" -le "17" ]] && [[ "$2" != "-force" ]] ; then
            echo "Cannot run this script without -force at this time"
            exit 1
        fi

    The idea is that I don't want this script to continue executing, unless forced to, during the hours of 9am to 5pm. But it always evaluates the condition to true and thus won't allow me to run the script.

        ./script.sh [action] (-force)

    Thx

    Edit: The output of set -x:

        $ ./test2.sh restart
        + START_TIME=9
        + STOP_TIME=17
        ++ date +%k
        + HOUR=11
        + [[ 11 -ge 9 ]]
        + [[ 11 -le 17 ]]
        + [[ '' != \-\f\o\r\c\e ]]
        + echo 'Cannot run this script without -force at this time'
        Cannot run this script without -force at this time
        + exit 1

    and then with -force:

        $ ./test2.sh restart -force
        + START_TIME=9
        + STOP_TIME=17
        ++ date +%k
        + HOUR=11
        + [[ 11 -ge 9 ]]
        + [[ 11 -le 17 ]]
        + [[ '' != \-\f\o\r\c\e ]]
        + echo 'Cannot run this script without -force at this time'
        Cannot run this script without -force at this time
        + exit 1
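
    One thing the trace suggests, as my reading rather than a confirmed diagnosis: $2 expands to an empty string even when -force is passed, which is what happens if this check runs inside a function or sourced block that has its own positional parameters. A sketch that scans all arguments instead (run it at the script's top level, or pass "$@" down into the function) and uses the START_TIME/STOP_TIME variables rather than repeating the literals:

        force=0
        for arg in "$@"; do
            [[ "$arg" == "-force" ]] && force=1
        done

        if (( HOUR >= START_TIME && HOUR <= STOP_TIME )) && (( force == 0 )); then
            echo "Cannot run this script without -force at this time"
            exit 1
        fi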

    Read the article

  • Access problems with System.Diagnostics.Process in webservice

    - by Martin
    Hello everyone. I have problems executing a process from a web service method on a Windows Server 2003 machine. Here is the code:

        Dim target As String = "C:\dnscmd.exe"
        Dim fileInfo As System.IO.FileInfo = New System.IO.FileInfo(target)
        If Not fileInfo.Exists Then
            Throw New System.IO.FileNotFoundException("The requested file was not found: " + fileInfo.FullName)
        End If

        Dim startinfo As New System.Diagnostics.ProcessStartInfo("C:\dnscmd.exe")
        startinfo.UseShellExecute = False
        startinfo.Arguments = "\\MYCOMPUTER /recordadd mydomain.no " & dnsname & " CNAME myhost.mydomain.no"
        startinfo.UserName = "user1"

        Dim password As New SecureString()
        For Each c As Char In "secretpassword".ToCharArray()
            password.AppendChar(c)
        Next
        startinfo.Password = password

        Process.Start(startinfo)

    I know the process is being executed because I used processmonitor.exe on the server, and it tells me that c:\dnscmd.exe is called with the right parameters. The full command line value from procmon is:

        "C:\dnscmd.exe" \MYCOMPUTER /recordadd mydomain.no mysubdomain CNAME myhost.mydomain.no

    The user of the created process on the server is NT AUTHORITY\SYSTEM. BUT, the DNS entry will not be added!

    Here is the weird thing: I KNOW the user (user1) I authenticate with has administrative rights (it's the same user I use to log on to the machine with), but the owner of the process in procmon is NT AUTHORITY\SYSTEM (which is not the user I authenticate with). Weirdest of all: if I log on to the server, copy the command line value read from the procmon logging, and paste it into a command line window, it works! Reading procmon after this shows that user1 owns the dnscmd process.

    Why doesn't user1 become the owner of the process started with System.Diagnostics.Process? Is that the reason why the command doesn't work?
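
    A small diagnostic sketch, added by me and not part of the original post; it doesn't change the credential handling, but capturing dnscmd's own output often shows exactly why the record isn't added, and WaitForExit keeps the web method from returning before the tool finishes:

        startinfo.RedirectStandardOutput = True
        startinfo.RedirectStandardError = True

        Dim proc As Process = Process.Start(startinfo)
        Dim output As String = proc.StandardOutput.ReadToEnd()
        Dim errors As String = proc.StandardError.ReadToEnd()
        proc.WaitForExit()

        ' log or inspect the tool's own diagnostics
        System.Diagnostics.Trace.WriteLine("dnscmd output: " & output & " / " & errors)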

    Read the article

  • What is the best way to automatically transpose a LilyPond source file into multiple keys?

    - by Michael Steele
    problem

    I'm using LilyPond to typeset sheet music for a church choir to perform. Depending on who is available on any given week, songs will be played in various keys. We have an amazing pianist who can play anything we throw at her, and the guitarists will typically pencil in alternate chords, but I want to make things easier by having beautifully typeset sheet music available in any key we want.

    So say we're going to sing our ABCs. First I'll take whatever source transcriptions are available and enter them into a LilyPond script:

        melody = \relative c' {
            c c g g a a g2
            f f e e d d c2
        }

    I want the ability to transpose this automatically, so if I want the whole thing in 'G' I wrap the song in a \transpose call like so:

        melody = \transpose c g \relative c' {
            c c g g a a g2
            f f e e d d c2
        }

    What I really want is to substitute something for the 'g' and generate the output for melody multiple times. Simple LilyPond variables don't seem to work here, and so far I've been unsuccessful in defining a Scheme function to do this. What I've resorted to for the moment is taking the above file, call it twinkle.ly, and turning it into an M4 script called twinkle.ly.m4, the contents of which look like this:

        melody = \transpose c _key \relative c' {
            c c g g a a g2
            f f e e d d c2
        }

    I then compile the whole thing by executing the following line:

        > m4 -D _key=g twinkle.ly.m4 > twinkle_g.ly && lilypond twinkle_g.ly

    I've written a Makefile to do this for me, defining rules for every song I have and every key I'm interested in.

    question

    There's got to be a better way of going about this. Given that LilyPond supports embedded Scheme, I would prefer not to use a macro preprocessor on it. Has anybody else come up with a solution to this same problem?
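
    A sketch of one preprocessor-free variant, offered as a suggestion with the caveat that it produces all keys in a single output file rather than one file per key: define the melody once, untransposed, and emit one \score block per target key.

        melody = \relative c' {
            c c g g a a g2
            f f e e d d c2
        }

        \score { \transpose c g \melody \layout { } }
        \score { \transpose c d \melody \layout { } }
        \score { \transpose c f \melody \layout { } }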

    Read the article

  • jQuery Validation error...

    - by Povylas
    Hi, I have been struggling with this jQuery Validation plugin. Here is the code:

        <script type="text/javascript">
        $(function() {
            var validator = $('#signup').validate({
                errorElement: 'span',
                rules: {
                    username: {
                        required: true,
                        minlenght: 6
                        //remote: "check-username.php"
                    },
                    password: {
                        required: true,
                        minlength: 5
                    },
                    confirm_password: {
                        required: true,
                        minlength: 5,
                        equalTo: "#password"
                    },
                    email: {
                        required: true,
                        email: true
                    },
                    agree: "required"
                },
                messages: {
                    username: {
                        required: "Please enter a username",
                        minlength: "Your username must consist of at least 6 characters"
                        //remote: "Somenoe have already chosen nick like this."
                    },
                    password: {
                        required: "Please provide a password",
                        minlength: "Your password must be at least 5 characters long"
                    },
                    confirm_password: {
                        required: "Please provide a password",
                        minlength: "Your password must be at least 5 characters long",
                        equalTo: "Please enter the same password as above"
                    },
                    email: "Please enter a valid email address",
                    agree: "Please accept our policy"
                }
            });

            var root = $("#wizard").scrollable({size: 1, clickable: false});

            // some variables that we need
            var api = root.scrollable();

            $("#data").click(function() {
                validator.form();
            });

            // validation logic is done inside the onBeforeSeek callback
            api.onBeforeSeek(function(event, i) {
                if ($("#signup").valid() == false) {
                    return false;
                } else {
                    return true;
                }
                $("#status li").removeClass("active").eq(i).addClass("active");
            });

            // if tab is pressed on the next button, seek to next page
            root.find("button.next").keydown(function(e) {
                if (e.keyCode == 9) {
                    // seeks to next tab by executing our validation routine
                    api.next();
                    e.preventDefault();
                }
            });

            $('button.fin').click(function(){
                parent.$.fn.fancybox.close()
            });
        });
        </script>

    And here is the error:

        $.validator.methods[method] is undefined
        http://www.vvv.vhost.lt/js/jquery-validate/jquery.validate.min.js
        Line 15

    I am completely confused... Maybe some kind of handler is needed? I would be grateful for any kind of answer.
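
    A likely cause, offered as a guess rather than a confirmed fix: the username rule is spelled "minlenght", so the plugin looks up $.validator.methods["minlenght"], which doesn't exist and comes back undefined. Correcting the spelling in the rules block should be enough:

        rules: {
            username: {
                required: true,
                minlength: 6      // was "minlenght", which has no matching validator method
            },
            // ... the rest unchanged ...
        }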

    Read the article

  • EF + UnitOfWork + SharePoint RunWithElevatedPrivileges

    - by Lorenzo
    In our SharePoint application we have used the UnitOfWork + Repository patterns together with Entity Framework. To avoid the usage of the passthrough authentication we have developed a piece of code that impersonate a single user before creating the ObjectContext instance in a similar way that is described in "Impersonating user with Entity Framework" on this site. The only difference between our code and the referred question is that, to do the impersonation, we are using RunWithElevatedPrivileges to impersonate the Application Pool identity as in the following sample. SPSecurity.RunWithElevatedPrivileges(delegate() { using (SPSite site = new SPSite(url)) { _context = new MyDataContext(ConfigSingleton.GetInstance().ConnectionString); } }); We have done this way because we expected that creating the ObjectContext after impersonation and, due to the fact that Repositories are receiving the impersonated ObjectContext would solve our requirement. Unfortunately it's not so easy. In fact we experienced that, even if the ObjectContext is created before and under impersonation circumstances, the real connection is made just before executing the query, and so does not use impersonation, which break our requirement. I have checked the ObjectContext class to see if there was any event through which we can inject the impersonation but unfortunately found nothing. Any help?

    Read the article

  • Makefile trickery using VPATH and include.

    - by roe
    Hi, I'm playing around with makefiles and the VPATH variable. Basically, I'm grabbing source files from a few different places (specified by VPATH) and compiling them into the current directory, using simply a list of the .o files that I want. So far so good. Now I'm generating dependency information into a file called '.depend' and including that. GNU make will attempt to use the rules defined so far to create the included file if it doesn't exist, so that's ok. Basically, my makefile looks like this:

        VPATH=A/source:B/source:C/source
        objects=first.o second.o third.o

        executable: $(objects)

        .depend: $(objects:.o=.c)
        	$(CC) -MM $^ > $@

        include .depend

    Now for the real question: can I suppress the generation of the .depend file in any way? I'm currently working in a ClearCase environment - sloooow - so I'd prefer to have it a bit more under control when to update the dependency information. It's more or less an academic exercise, as I could just wrap the thing in a script which touches the .depend file before executing make (thus making it more recent than any source file), but it'd be interesting to know if I can somehow suppress it using 'pure' make. I cannot remove the dependency on the source files (i.e. using simply .depend:), as I'm depending on the $^ variable to do the VPATH resolution for me. If there'd be any way to only update dependencies as a result of updated #include directives, that'd be even better of course.. But I'm not holding my breath for that one.. :)
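
    A sketch of one 'pure make' option, offered as an assumption about what's acceptable here: give .depend no rule of its own and regenerate it only on an explicit `make depend`, with `-include` so a missing .depend isn't an error. The phony target still lists the sources as prerequisites, so $^ keeps doing the VPATH resolution.

        .PHONY: depend
        depend: $(objects:.o=.c)
        	$(CC) -MM $^ > .depend

        # no rule exists for .depend itself, so make won't try to regenerate it;
        # the leading '-' suppresses the error if it hasn't been created yet
        -include .depend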

    Read the article

  • Objects in JavaScript defined and undefined at the same time (in a FireFox extension)

    - by Alexey Romanov
    I am chasing down a bug in a FireFox extension. I've finally managed to see it for myself (I've only had reports before) and I can't understand how what I saw is possible. One error message from my extension in the Error Console is "gBrowser is not defined". This by itself would be surprising enough, since the overlay is over browser.xul and navigator.xul, and I expect gBrowser to be available from both. Even worse is the actual place where it happens: line 101 of nextplease.js. That is, inside the function isTopLevelDocument, which is only called from onContentLoaded, which is only called from onLoad here: gBrowser.addEventListener(this.loadType, function (event) { nextplease.loadListener.onContentLoaded(event); }, true); So gBrowser is defined in onLoad, but somehow undefined in isTopLevelDocument. When I tried to actually use the extension, I got another error: "nextplease is not defined". The interesting thing is that it happened on lines 853 and 857. That is, inside the functions nextplease.getNextLink = function () { nextplease.getLink(window.content, nextplease.NextPhrasesMap, nextplease.NextImagesMap, nextplease.isNextRegExp, nextplease.NEXT_SEARCH_TYPE); } nextplease.getPrevLink = function () { nextplease.getLink(window.content, nextplease.PrevPhrasesMap, nextplease.PrevImagesMap, nextplease.isPrevRegExp, nextplease.PREV_SEARCH_TYPE); } So nextplease is somehow defined enough to call these functions, but isn't defined inside them. Finally, executing typeof(nextplease) in Execute JS returns "object". Same for gBrowser. How can this happen? Any ideas?

    Read the article

  • Building a News-feed that comprises posts "created by user's connections" && "on the topics user is following"

    - by aklin81
    I am working on a project of Questions & Answers website that allows a user to follow questions on certain topics from his network. A user's news-feed wall comprises of only those questions that have been posted by his connections and tagged on the topics that he is following(his expertise topics). I am confused what database's datamodel would be most fitting for such an application. The project needs to consider the future provisions for scalability and high performance issues. I have been looking at Cassandra and MySQL solutions as of now. After my study of Cassandra I realized that Simple news-feed design that shows all the posts from network would be easy to design using Cassandra by executing fast writes to all followers of a user about the post from user. But for my kind of application where there is an additional filter of 'followed topics', (ie, the user receives posts "created by his network" && "on topics user is following"), I could not convince myself with a good schema design in Cassandra. I hope if I missed something because of my short understanding of cassandra, perhaps, can you please help me out with your suggestions of how this news-feed could be implemented in Cassandra ? Looking for a great project with Cassandra ! Edit: There are going to be maximum 5 tags allowed for tagging the question (ie, max 5 topics can be tagged on a question).

    Read the article

  • How can I change the CSS visibility style for elements that are not on the screen?

    - by RenderIn
    I have a lot of data being placed into a <DIV> with the overflow: auto style. Firefox handles this gracefully but IE becomes very sluggish both when scrolling the div and when executing any Javascript on the page. At first I thought IE just couldn't handle that much data in its DOM, but then I did a simple test where I applied the visibility: hidden style to every element past the first 100. They still take up space and cause the scrollbars to appear. IE no longer had a problem with the data when I did this. So, I'd like to have a "smart" div that hides all the nested div elements which are not currently visible on the screen. Is there a simple solution to this or will I need to have an infinite loop which calculates the location of the scrollbar? If not, is there a particular event that I can hook into where I could do this? Is there a jQuery selector or plugin that will allow me to select all elements not currently visible on the screen?
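
    A rough sketch of the scroll-driven approach (the container id and markup are my assumptions, not from the post): on the div's scroll event, compare each child's offsets against the visible range and toggle visibility, so IE only has to paint the rows currently in view.

        // assumes the container (#bigDiv) has position: relative, so each child's
        // offsetTop is measured from the container itself
        $('#bigDiv').scroll(function () {
            var top = this.scrollTop;
            var bottom = top + this.clientHeight;
            $(this).children('div').each(function () {
                var above = (this.offsetTop + this.offsetHeight) < top;
                var below = this.offsetTop > bottom;
                $(this).css('visibility', (above || below) ? 'hidden' : 'visible');
            });
        });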

    Read the article
