Search Results

Search found 41623 results on 1665 pages for 'type constraints'.


  • PHP will not delete from MySQL

    - by Michal Kopanski
    For some reason, JavaScript/PHP wont delete my data from MySQL! Here is the rundown of the problem. I have an array that displays all my MySQL entries in a nice format, with a button to delete the entry for each one individually. It looks like this: <?php include("login.php"); //connection to the database $dbhandle = mysql_connect($hostname, $username, $password) or die("<br/><h1>Unable to connect to MySQL, please contact support at [email protected]</h1>"); //select a database to work with $selected = mysql_select_db($dbname, $dbhandle) or die("Could not select database."); //execute the SQL query and return records if (!$result = mysql_query("SELECT `id`, `url` FROM `videos`")) echo 'mysql error: '.mysql_error(); //fetch tha data from the database while ($row = mysql_fetch_array($result)) { ?> <div class="video"><a class="<?php echo $row{'id'}; ?>" href="http://www.youtube.com/watch?v=<?php echo $row{'url'}; ?>">http://www.youtube.com/watch?v=<?php echo $row{'url'}; ?></a><a class="del" href="javascript:confirmation(<? echo $row['id']; ?>)">delete</a></div> <?php } //close the connection mysql_close($dbhandle); ?> The delete button has an href of javascript:confirmation(<? echo $row['id']; ?>) , so once you click on delete, it runs this: <script type="text/javascript"> <!-- function confirmation(ID) { var answer = confirm("Are you sure you want to delete this video?") if (answer){ alert("Entry Deleted") window.location = "delete.php?id="+ID; } else{ alert("No action taken") } } //--> </script> The JavaScript should theoretically pass the 'ID' onto the page delete.php. That page looks like this (and I think this is where the problem is): <?php include ("login.php"); mysql_connect($hostname, $username, $password) or die("Unable to connect to MySQL"); mysql_select_db ($dbname) or die("Unable to connect to database"); mysql_query("DELETE FROM `videos` WHERE `videos`.`id` ='.$id.'"); echo ("Video has been deleted."); ?> If there's anyone out there that may know the answer to this, I would greatly appreciate it. I am also opened to suggestions (for those who aren't sure). Thanks!
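
    The post uses the legacy mysql_* API, and delete.php never reads the id passed on the query string, so the DELETE statement's WHERE clause ends up malformed. Below is a minimal sketch, not the poster's code, of what delete.php could look like if the id were read from $_GET and the query were parameterized; it assumes login.php defines $hostname, $username, $password and $dbname as in the question, and swaps mysql_* for mysqli.

        <?php
        // delete.php - illustrative sketch only
        include("login.php");

        // The id arrives via the query string built by the JavaScript confirmation() call.
        $id = isset($_GET['id']) ? (int)$_GET['id'] : 0;

        $db = new mysqli($hostname, $username, $password, $dbname);
        if ($db->connect_error) {
            die("Unable to connect to MySQL: " . $db->connect_error);
        }

        // A prepared statement avoids both the broken quoting and SQL injection.
        $stmt = $db->prepare("DELETE FROM `videos` WHERE `id` = ?");
        $stmt->bind_param("i", $id);
        $stmt->execute();

        echo "Video has been deleted.";
        $stmt->close();
        $db->close();
        ?>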

    Read the article

  • Is the Scala 2.8 collections library a case of "the longest suicide note in history" ?

    - by oxbow_lakes
    First note the inflammatory subject title is a quotation made about the manifesto of a UK political party in the early 1980s. This question is subjective but it is a genuine question, I've made it CW and I'd like some opinions on the matter. Despite whatever my wife and coworkers keep telling me, I don't think I'm an idiot: I have a good degree in mathematics from the University of Oxford and I've been programming commercially for almost 12 years and in Scala for about a year (also commercially). I have just started to look at the Scala collections library re-implementation which is coming in the imminent 2.8 release. Those familiar with the library from 2.7 will notice that the library, from a usage perspective, has changed little. For example... > List("Paris", "London").map(_.length) res0: List[Int] List(5, 6) ...would work in either versions. The library is eminently useable: in fact it's fantastic. However, those previously unfamiliar with Scala and poking around to get a feel for the language now have to make sense of method signatures like: def map[B, That](f: A => B)(implicit bf: CanBuildFrom[Repr, B, That]): That For such simple functionality, this is a daunting signature and one which I find myself struggling to understand. Not that I think Scala was ever likely to be the next Java (or /C/C++/C#) - I don't believe its creators were aiming it at that market - but I think it is/was certainly feasible for Scala to become the next Ruby or Python (i.e. to gain a significant commercial user-base) Is this going to put people off coming to Scala? Is this going to give Scala a bad name in the commercial world as an academic plaything that only dedicated PhD students can understand? Are CTOs and heads of software going to get scared off? Was the library re-design a sensible idea? If you're using Scala commercially, are you worried about this? Are you planning to adopt 2.8 immediately or wait to see what happens? Steve Yegge once attacked Scala (mistakenly in my opinion) for what he saw as its overcomplicated type-system. I worry that someone is going to have a field day spreading fud with this API (similarly to how Josh Bloch scared the JCP out of adding closures to Java). Note - I should be clear that, whilst I believe that Josh Bloch was influential in the rejection of the BGGA closures proposal, I don't ascribe this to anything other than his honestly-held beliefs that the proposal represented a mistake.
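
    For readers puzzled by that signature, a short REPL sketch (not from the original post; output abbreviated) of what the implicit CanBuildFrom parameter buys in 2.8: the result type is chosen from both the source collection and the element type produced by the function.

        scala> "abc".map(_.toUpper)            // Char => Char: the result stays a String
        res0: String = ABC

        scala> "abc".map(_.toInt)              // Char => Int: falls back to an IndexedSeq
        res1: scala.collection.immutable.IndexedSeq[Int] = Vector(97, 98, 99)

        scala> import scala.collection.immutable.BitSet
        scala> BitSet(1, 2, 3).map(_ + 1)      // Int => Int: still a BitSet
        res2: scala.collection.immutable.BitSet = BitSet(2, 3, 4)

        scala> BitSet(1, 2, 3).map(_.toString) // Int => String: can no longer be a BitSet
        res3: scala.collection.immutable.Set[String] = Set(1, 2, 3)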

    Read the article

  • SQL Server 2008 Filestream Impersonation Access Denied error

    - by Adi
    I've been trying to upload a file to the database using SQL SERVER 2008 Filestream and Impersonation technique to save the file in the file system, but i keep getting Access Denied error; even though i've set the permissions for the impersonating user to the Filestream folder(C:\SQLFILESTREAM\Dev_DB). when i debugged the code, i found the server return a unc path(\Server_Name\MSSQLSERVER\v1\Dev_LMDB\dbo\FileData\File_Data\13C39AB1-8B91-4F5A-81A1-940B58504C17), which was not accessible through windows explorer. I've my web application hosted on local maching(Windows 7). SQL Server is located on a remote server(Windows Server 2008 R2). Sql authentication was used to call the stored procedure. Following is the code i've used to do the above operations. SqlCommand sqlCmd = new SqlCommand("AddFile"); sqlCmd.CommandType = CommandType.StoredProcedure; sqlCmd.Parameters.Add("@File_Name", SqlDbType.VarChar, 512).Value = filename; sqlCmd.Parameters.Add("@File_Type", SqlDbType.VarChar, 5).Value = Path.GetExtension(filename); sqlCmd.Parameters.Add("@Username", SqlDbType.VarChar, 20).Value = username; sqlCmd.Parameters.Add("@Output_File_Path", SqlDbType.VarChar, -1).Direction = ParameterDirection.Output; DAManager PDAM = new DAManager(DAManager.getConnectionString()); using (SqlConnection connection = (SqlConnection)PDAM.CreateConnection()) { connection.Open(); SqlTransaction transaction = connection.BeginTransaction(); WindowsImpersonationContext wImpersonationCtx; NetworkSecurity ns = null; try { PDAM.ExecuteNonQuery(sqlCmd, transaction); string filepath = sqlCmd.Parameters["@Output_File_Path"].Value.ToString(); sqlCmd = new SqlCommand("SELECT GET_FILESTREAM_TRANSACTION_CONTEXT()"); sqlCmd.CommandType = CommandType.Text; byte[] Context = (byte[])PDAM.ExecuteScalar(sqlCmd, transaction); byte[] buffer = new byte[4096]; int bytedRead; ns = new NetworkSecurity(); wImpersonationCtx = ns.ImpersonateUser(IMP_Domain, IMP_Username, IMP_Password, LogonType.LOGON32_LOGON_INTERACTIVE, LogonProvider.LOGON32_PROVIDER_DEFAULT); SqlFileStream sfs = new SqlFileStream(filepath, Context, System.IO.FileAccess.Write); while ((bytedRead = inFS.Read(buffer, 0, buffer.Length)) != 0) { sfs.Write(buffer, 0, bytedRead); } sfs.Close(); transaction.Commit(); } catch (Exception ex) { transaction.Rollback(); } finally { sqlCmd.Dispose(); connection.Close(); connection.Dispose(); ns.undoImpersonation(); wImpersonationCtx = null; ns = null; } } Can someone help me with this issue. Reference Exception: Type : System.ComponentModel.Win32Exception, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Message : Access is denied Source : System.Data Help link : NativeErrorCode : 5 ErrorCode : -2147467259 Data : System.Collections.ListDictionaryInternal TargetSite : Void OpenSqlFileStream(System.String, Byte[], System.IO.FileAccess, System.IO.FileOptions, Int64) Stack Trace : at System.Data.SqlTypes.SqlFileStream.OpenSqlFileStream(String path, Byte[] transactionContext, FileAccess access, FileOptions options, Int64 allocationSize) at System.Data.SqlTypes.SqlFileStream..ctor(String path, Byte[] transactionContext, FileAccess access, FileOptions options, Int64 allocationSize) at System.Data.SqlTypes.SqlFileStream..ctor(String path, Byte[] transactionContext, FileAccess access) Thanks

    Read the article

  • Trouble binding command in grid menu item.

    - by Pete
    I have a grid that's inside a usercontrol derived class called MediatedUserControl. I'm adding a context menu to let the user delete an item, but I've been unable to figure out how to bind the command to my command property. I'm using MVVM and my viewmodel implements a public ICommand property called DeleteSelectedItemCommand. However, when the view is displayed, I get the following message in the output window: System.Windows.Data Error: 4 : Cannot find source for binding with reference 'RelativeSource FindAncestor, AncestorType='BRO.View.MediatedUserControl', AncestorLevel='1''. BindingExpression:Path=DataContext.DeleteSelectedItemCommand; DataItem=null; target element is 'BarButtonItem' (HashCode=6860584); target property is 'Command' (type 'ICommand') I feel like I generally have a good handle on bindings like this and can't figure out what it is I'm missing here. Thanks for any help you can provide. <dxg:GridControl HorizontalAlignment="Left" Margin="12,88,0,0" x:Name="gridControl1" VerticalAlignment="Top" Height="500" Width="517" DataSource="{Binding ItemList}" BorderBrush="{StaticResource {x:Static SystemColors.ActiveBorderBrushKey}}" ShowBorder="True" Background="{StaticResource {x:Static SystemColors.ControlLightBrushKey}}" UseLayoutRounding="False" DataContext="{Binding}"> <dxg:GridControl.Columns> <dxg:GridColumn FieldName="Code" Header="Code" Width="107" /> <dxg:GridColumn FieldName="Name" Header="Item" Width="173" /> <dxg:GridColumn FieldName="PricePerItem" Header="Unit Price" Width="70"> <dxg:GridColumn.EditSettings> <dxe:TextEditSettings DisplayFormat="N2" /> </dxg:GridColumn.EditSettings> </dxg:GridColumn> <dxg:GridColumn FieldName="Quantity" Header="Qty" Width="50" AllowEditing="True" /> <dxg:GridColumn FieldName="TotalPrice" Header="Total Price" Width="90"> <dxg:GridColumn.EditSettings> <dxe:TextEditSettings DisplayFormat="N2" /> </dxg:GridColumn.EditSettings> </dxg:GridColumn> </dxg:GridControl.Columns> <dxg:GridControl.View> <dxg:TableView ShowIndicator="False" ShowGroupPanel="False" MultiSelectMode="Row" AllowColumnFiltering="False" AllowBestFit="False" AllowFilterEditor="False" AllowEditing="False" AllowGrouping="False" AllowSorting="False" AllowResizing="False" AllowMoving="False" AllowMoveColumnToDropArea="False" AllowDateTimeGroupIntervalMenu="False" > <dxg:TableView.RowCellMenuCustomizations> <dxb:BarButtonItem Name="deleteRowItem" Command="{Binding RelativeSource={RelativeSource FindAncestor, AncestorType=view:MediatedUserControl, AncestorLevel=1}, Path=DataContext.DeleteSelectedItemCommand}"> </dxb:BarButtonItem> </dxg:TableView.RowCellMenuCustomizations> </dxg:TableView> </dxg:GridControl.View>
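
    That binding error usually means the menu item is not in the visual tree when the binding is resolved, so FindAncestor has nothing to walk. One general-purpose WPF workaround, sketched here with illustrative names and not specific to the DevExpress controls in the post, is a Freezable "binding proxy" declared in resources; a Freezable inherits the DataContext even for elements outside the visual tree.

        // Sketch only: generic WPF helper, class name is illustrative.
        public class BindingProxy : System.Windows.Freezable
        {
            protected override System.Windows.Freezable CreateInstanceCore()
            {
                return new BindingProxy();
            }

            public object Data
            {
                get { return GetValue(DataProperty); }
                set { SetValue(DataProperty, value); }
            }

            public static readonly System.Windows.DependencyProperty DataProperty =
                System.Windows.DependencyProperty.Register("Data", typeof(object),
                    typeof(BindingProxy), new System.Windows.UIPropertyMetadata(null));
        }

    It would then be declared in the grid's resources and referenced through Source instead of RelativeSource:

        <dxg:GridControl.Resources>
            <local:BindingProxy x:Key="proxy" Data="{Binding}" />
        </dxg:GridControl.Resources>
        ...
        <dxb:BarButtonItem Name="deleteRowItem"
            Command="{Binding Data.DeleteSelectedItemCommand, Source={StaticResource proxy}}" />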

    Read the article

  • Mocking objects with complex Lambda Expressions as parameters

    - by iCe
    Hi there, I'm encountering this problem trying to mock some objects that receive complex lambda expressions in my projects. Mostly with proxy objects that receive this type of delegate: Func<Tobj, Func<TParam1, TParam2, TResult>> I have tried to use Moq as well as RhinoMocks to accomplish mocking those types of objects, however both fail. (Moq fails with NotSupportedException, and RhinoMocks simply does not satisfy the expectation.) This is a simplified example of what I'm trying to do: I have a Calculator object that does calculations: public class Calculator { public Calculator() { } public int Add(int x, int y) { var result = x + y; return result; } public int Substract(int x, int y) { var result = x - y; return result; } } I need to validate parameters on every method in the Calculator class, so to keep with the Single Responsibility principle, I create a validator class. I wire everything up using a Proxy class, which prevents having duplicate code: public class CalculatorProxy : CalculatorExample.ICalculatorProxy { private ILimitsValidator _validator; public CalculatorProxy(Calculator _calc, ILimitsValidator _validator) { this.Calculator = _calc; this._validator = _validator; } public int Operation(Func<Calculator, Func<int, int, int>> operation, int x, int y) { _validator.ValidateArgs(x, y); var calcMethod = operation(this.Calculator); var result = calcMethod(x, y); _validator.ValidateResult(result); return result; } public Calculator Calculator { get; private set; } } Now, I'm testing a component that uses the CalculatorProxy, so I want to mock it, for example using Rhino Mocks: [TestMethod] public void ParserWorksWithCalcultaroProxy() { var calculatorProxyMock = MockRepository.GenerateMock<ICalculatorProxy>(); calculatorProxyMock.Expect(x => x.Calculator).Return(_calculator); calculatorProxyMock.Expect(x => x.Operation(c => c.Add, 2, 2)).Return(4); var mathParser = new MathParser(calculatorProxyMock); mathParser.ProcessExpression("2 + 2"); calculatorProxyMock.VerifyAllExpectations(); } However I cannot get it to work! Any ideas about how this can be done? Thanks a lot!
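
    The expectation that fails is the one passing a lambda (c => c.Add) as an argument: neither framework can compare two lambdas for equality, so the recorded expectation never matches the call made by MathParser. A sketch, reusing the member names from the post, of matching the delegate argument loosely with each framework's standard argument matchers:

        // Rhino Mocks flavour: when one argument uses Arg<>, all of them must.
        calculatorProxyMock
            .Expect(x => x.Operation(
                Arg<Func<Calculator, Func<int, int, int>>>.Is.Anything,
                Arg<int>.Is.Equal(2),
                Arg<int>.Is.Equal(2)))
            .Return(4);

        // Moq flavour (assuming a hypothetical Mock<ICalculatorProxy> named calculatorProxyMoq).
        calculatorProxyMoq
            .Setup(x => x.Operation(
                It.IsAny<Func<Calculator, Func<int, int, int>>>(), 2, 2))
            .Returns(4);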

    Read the article

  • Connect to running web role on Azure using Remote Desktop Connection and VS2012

    - by Magnus Karlsson
    We want to be able to collect IntelliTrace information from our running app and also use remote desktop to connect to the IIS and look around(probably debugging). 1. Create certificate 1.1 Right-click the cloud project (marked in red) and select “Configure remote desktop”. 1.2 In the drop down list of certificates, choose <create> at the bottom. 1.3. Follow the instructions, you can set it up with default values. 1.4 When done. Choose the certificate and click “Copy to File…” as seen in the left of the picture above. 1.5. Save the file with any name you want. Now we will save it to local storage to be able to import it to our solution through the azure configuration manager in step 3. 2. Save certificate to local storage Now we need to attach it to our local certificate storage to be able to reach it from our confiuguration manager in visual studio. Microsoft provides the following steps for doing this: http://support.microsoft.com/kb/232137 In order to view the Certificates store on the local computer, perform the following steps: Click Start, and then click Run. Type "MMC.EXE" (without the quotation marks) and click OK. Click Console in the new MMC you created, and then click Add/Remove Snap-in. In the new window, click Add. Highlight the Certificates snap-in, and then click Add. Choose the Computer option and click Next. Select Local Computer on the next screen, and then click OK. Click Close , and then click OK. You have now added the Certificates snap-in, which will allow you to work with any certificates in your computer's certificate store. You may want to save this MMC for later use. Now that you have access to the Certificates snap-in, you can import the server certificate into you computer's certificate store by following these steps: Open the Certificates (Local Computer) snap-in and navigate to Personal, and then Certificates. Note: Certificates may not be listed. If it is not, that is because there are no certificates installed. Right-click Certificates (or Personal if that option does not exist.) Choose All Tasks, and then click Import. When the wizard starts, click Next. Browse to the PFX file you created containing your server certificate and private key. Click Next. Enter the password you gave the PFX file when you created it. Be sure the Mark the key as exportable option is selected if you want to be able to export the key pair again from this computer. As an added security measure, you may want to leave this option unchecked to ensure that no one can make a backup of your private key. Click Next, and then choose the Certificate Store you want to save the certificate to. You should select Personal because it is a Web server certificate. If you included the certificates in the certification hierarchy, it will also be added to this store. Click Next. You should see a summary of screen showing what the wizard is about to do. If this information is correct, click Finish. You will now see the server certificate for your Web server in the list of Personal Certificates. It will be denoted by the common name of the server (found in the subject section of the certificate). Now that you have the certificate backup imported into the certificate store, you can enable Internet Information Services 5.0 to use that certificate (and the corresponding private key). To do this, perform the following steps: Open the Internet Services Manager (under Administrative Tools) and navigate to the Web site you want to enable secure communications (SSL/TLS) on. 
Right-click on the site and click Properties. You should now see the properties screen for the Web site. Click the Directory Security tab. Under the Secure Communications section, click Server Certificate. This will start the Web Site Certificate Wizard. Click Next. Choose the Assign an existing certificate option and click Next. You will now see a screen showing that contents of your computer's personal certificate store. Highlight your Web server certificate (denoted by the common name), and then click Next. You will now see a summary screen showing you all the details about the certificate you are installing. Be sure that this information is correct or you may have problems using SSL or TLS in HTTP communications. Click Next, and then click OK to exit the wizard. You should now have an SSL/TLS-enabled Web server. Be sure to protect your PFX files from any unwanted personnel. Image of a typical MMC.EXE with the certificates up.   3. Import the certificate to you visual studio project. 3.1 Now right click your equivalent to the MvcWebRole1 (as seen in the first picture under the red oval) and choose properties. 3.2 Choose Certificates. Right click the ellipsis to the right of the “thumbprint” and you should be able to select your newly created certificate here. After selecting it- save the file.   4. Upload the certificate to your Azure subscription. 4.1 Go to the azure management portal, click the services menu icon to the left and choose the service. Click Upload in the bottom menu.     5. Connect to server. Since I tried to use account settings(have to use another name) we have to set up a new name for the connection. No biggie. 5.1 Go to azure management portal, select your service and in the bottom menu, choose “REMOTE”. This will display the configuration for remote connection. It will actually change your ServiceConfiguration.cscfg file. After you change It here it might be good to choose download and replace the one in your project. Set a name that is not your windows azure account name and not Administrator. 5.2 Goto visual studio, click Server Explorer. Choose as selected in the picture below and click “COnnect using remote desktop”.   5.2 You will now be able to log in with the name and password set up in step 5.1. and voila! Windows server 2012, IIS and other nice stuff!   To do this one I’ve been using http://msdn.microsoft.com/en-us/library/windowsazure/ff683671.aspx where you can collect some of this information and additional one.
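
    As an aside, the manual MMC import in step 2 can also be scripted. A minimal sketch, assuming the certificate was exported together with its private key as a .pfx and that the PKI PowerShell module is available (Windows 8 / Server 2012 or later); the path and password are placeholders:

        # Import the exported PFX into the local machine's Personal store.
        $pwd = ConvertTo-SecureString -String "YourPfxPassword" -AsPlainText -Force
        Import-PfxCertificate -FilePath "C:\certs\azure-remote.pfx" `
                              -CertStoreLocation Cert:\LocalMachine\My `
                              -Password $pwd

        # List thumbprints; the right one is pasted into the role's Certificates page in step 3.2.
        Get-ChildItem Cert:\LocalMachine\My | Format-Table Thumbprint, Subject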

    Read the article

  • Using Jquery.Form Plugin + MultiFile to automatically upload a single file

    - by Alan Neal
    I wanted to find a way to upload a single file*, in the background, have it start automatically after file selection, and not require a flash uploader, so I am trying to use two great mechanisms (jQuery.Form and JQuery MultiFile) together. I haven't succeeded, but I'm pretty sure it's because I'm missing something fundamental. Just using MultiFile, I define the form as follows... <form id="photoForm" action="image.php" method="post" enctype="multipart/form-data"> The file input button is defined as... <input id="photoButton" "name="sourceFile" class="photoButton max-1 accept-jpg" type="file"> And the Javascript is... $('#photoButton').MultiFile({ afterFileSelect: function(){ document.getElementById("photoForm").submit(); } }); This works perfectly. As soon as the user selects a single file, MultiFile submits the form to the server. If instead of using MultiFile, as shown above, let's say I include a Submit button along with the JQuery Form plugin defined as follows... var options = { success: respondToUpload }; $('#photoForm').ajaxForm(options); ... this also works perfectly. When the Submit button is clicked, the form is uploaded in the background. What I don't know how to do is get these two to work together. If I use Javascript to submit the form (as shown in the MultiFile example above), the form is submitted but the JQuery.Form function is not called, so the form does not get submitted in the background. I thought that maybe I needed to change the form registration as follows... $('#photoForm').submit(function() { $('#photoForm').ajaxForm(options); }); ...but that didn't solve the problem. The same is true when I tried .ajaxSubmit instead of .ajaxForm. What am I missing? BTW: I know it might sound strange to use MultiFile for single-file uploads, but the idea is that the number of files will be dynamic based on the user's account. So, I'm starting with one but the number changes depending on conditions.
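
    One likely culprit, offered as a guess rather than a confirmed answer: document.getElementById("photoForm").submit() is the native DOM submit, which bypasses jQuery's submit handlers, so ajaxForm never gets a chance to intercept the post. The Form plugin's ajaxSubmit() submits in the background immediately, so it can be called straight from MultiFile's callback; a sketch using the same ids and callback as the question:

        // Sketch: let MultiFile trigger a background (AJAX) upload via the Form plugin.
        var options = { success: respondToUpload };   // same options object as above

        $('#photoButton').MultiFile({
            afterFileSelect: function () {
                // ajaxSubmit() posts the form via AJAX right now, whereas the
                // DOM-level form.submit() skips jQuery handlers entirely.
                $('#photoForm').ajaxSubmit(options);
            }
        });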

    Read the article

  • Tools to help with analysing log files

    - by peter
    I am developing a C# .NET application. In the app.config file I add trace logging as shown, <?xml version="1.0" encoding="UTF-8" ?> <configuration> <system.diagnostics> <trace autoflush="true" /> <sources> <source name="System.Net.Sockets" maxdatasize="1024"> <listeners> <add name="MyTraceFile"/> </listeners> </source> </sources> <sharedListeners> <add name="MyTraceFile" type="System.Diagnostics.TextWriterTraceListener" initializeData="System.Net.trace.log" /> </sharedListeners> <switches> <add name="System.Net" value="Verbose" /> </switches> </system.diagnostics> </configuration> Are there any good tools around to analyse the log file that is output? The output looks like this, System.Net.Sockets Verbose: 0 : [5900] Data from Socket#8764489::Send DateTime=2010-04-07T03:22:40.1067012Z System.Net.Sockets Verbose: 0 : [5900] 00000000 : 4D 49 4D 45 2D 56 65 72-73 69 6F 6E 3A 20 31 2E : MIME-Version: 1. DateTime=2010-04-07T03:22:40.1067012Z System.Net.Sockets Verbose: 0 : [5900] 00000060 : 65 3A 20 37 20 41 70 72-20 32 30 31 30 20 31 35 : e: 7 Apr 2010 15 DateTime=2010-04-07T03:22:40.1067012Z System.Net.Sockets Verbose: 0 : [5900] 00000070 : 3A 32 32 3A 34 30 20 2B-31 32 30 30 0D 0A 53 75 : :22:40 +1200..Su DateTime=2010-04-07T03:22:40.1067012Z System.Net.Sockets Verbose: 0 : [5900] 00000080 : 62 6A 65 63 74 3A 20 5B-45 72 72 6F 72 5D 20 45 : bject: [Error] E DateTime=2010-04-07T03:22:40.1067012Z System.Net.Sockets Verbose: 0 : [5900] 00000090 : 78 63 65 70 74 69 6F 6E-20 69 6E 20 53 79 6E 63 : xception in Sync DateTime=2010-04-07T03:22:40.1067012Z System.Net.Sockets Verbose: 0 : [5900] 000000A0 : 53 65 72 76 69 63 65 20-28 32 30 30 38 2E 30 2E : Service (2008.0. DateTime=2010-04-07T03:22:40.1067012Z System.Net.Sockets Verbose: 0 : [5900] 000000B0 : 33 30 34 2E 31 32 33 34-32 29 0D 0A 43 6F 6E 74 : 304.12342)..Cont DateTime=2010-04-07T03:22:40.1067012Z Is there anything that can take the output shown above (my output is a text file 100mb in size), group together packets, and help out with finding particular issues I would like to hear about it. Thanks.

    Read the article

  • Is O_NONBLOCK being set a property of the file descriptor or underlying file?

    - by Daniel Trebbien
    From what I have been reading on The Open Group website on fcntl, open, read, and write, I get the impression that whether O_NONBLOCK is set on a file descriptor, and hence whether non-blocking I/O is used with the descriptor, should be a property of that file descriptor rather than the underlying file. Being a property of the file descriptor means, for example, that if I duplicate a file descriptor or open another descriptor to the same file, then I can use blocking I/O with one and non-blocking I/O with the other. Experimenting with a FIFO, however, it appears that it is not possible to have a blocking I/O descriptor and non-blocking I/O descriptor to the FIFO simultaneously (so whether O_NONBLOCK is set is a property of the underlying file [the FIFO]): #include <errno.h> #include <fcntl.h> #include <stdio.h> #include <stdlib.h> #include <unistd.h> int main(int argc, char **argv) { int fds[2]; if (pipe(fds) == -1) { fprintf(stderr, "`pipe` failed.\n"); return EXIT_FAILURE; } int fd0_dup = dup(fds[0]); if (fd0_dup <= STDERR_FILENO) { fprintf(stderr, "Failed to duplicate the read end\n"); return EXIT_FAILURE; } if (fds[0] == fd0_dup) { fprintf(stderr, "`fds[0]` should not equal `fd0_dup`.\n"); return EXIT_FAILURE; } if ((fcntl(fds[0], F_GETFL) & O_NONBLOCK)) { fprintf(stderr, "`fds[0]` should not have `O_NONBLOCK` set.\n"); return EXIT_FAILURE; } if (fcntl(fd0_dup, F_SETFL, fcntl(fd0_dup, F_GETFL) | O_NONBLOCK) == -1) { fprintf(stderr, "Failed to set `O_NONBLOCK` on `fd0_dup`\n"); return EXIT_FAILURE; } if ((fcntl(fds[0], F_GETFL) & O_NONBLOCK)) { fprintf(stderr, "`fds[0]` should still have `O_NONBLOCK` unset.\n"); return EXIT_FAILURE; // RETURNS HERE } char buf[1]; if (read(fd0_dup, buf, 1) != -1) { fprintf(stderr, "Expected `read` on `fd0_dup` to fail immediately\n"); return EXIT_FAILURE; } else if (errno != EAGAIN) { fprintf(stderr, "Expected `errno` to be `EAGAIN`\n"); return EXIT_FAILURE; } return EXIT_SUCCESS; } This leaves me thinking: is it ever possible to have a non-blocking I/O descriptor and blocking I/O descriptor to the same file and if so, does it depend on the type of file (regular file, FIFO, block special file, character special file, socket, etc.)?
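
    For what it is worth, POSIX ties the file status flags (O_NONBLOCK among them) to the open file description, which dup() shares - that would explain why the flag set through fd0_dup shows up on fds[0] above. Descriptors that come from separate open() calls get separate open file descriptions and therefore independent flags, even for the same FIFO. A small sketch illustrating the difference (path is a placeholder, error handling omitted):

        /* Two independent open()s of a FIFO keep independent O_NONBLOCK settings,
         * while dup() shares them. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/stat.h>
        #include <sys/types.h>
        #include <unistd.h>

        int main(void)
        {
            const char *path = "/tmp/demo_fifo";        /* placeholder */
            mkfifo(path, 0600);

            /* O_NONBLOCK so the opens return immediately with no writer present. */
            int a = open(path, O_RDONLY | O_NONBLOCK);
            int b = open(path, O_RDONLY | O_NONBLOCK);  /* separate open file description */
            int c = dup(a);                             /* shares a's open file description */

            /* Clear O_NONBLOCK through 'a' only. */
            fcntl(a, F_SETFL, fcntl(a, F_GETFL) & ~O_NONBLOCK);

            printf("a: %d\n", (fcntl(a, F_GETFL) & O_NONBLOCK) != 0); /* 0               */
            printf("b: %d\n", (fcntl(b, F_GETFL) & O_NONBLOCK) != 0); /* 1: independent   */
            printf("c: %d\n", (fcntl(c, F_GETFL) & O_NONBLOCK) != 0); /* 0: shared with a */

            close(a); close(b); close(c);
            unlink(path);
            return 0;
        }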

    Read the article

  • A couple questions using fwrite/fread with data structures

    - by Nazgulled
    Hi, I'm using fwrite() and fread() for the first time to write some data structures to disk and I have a couple of questions about best practices and proper ways of doing things. What I'm writing to disk (so I can later read it back) is all user profiles inserted in a Graph structure. Each graph vertex is of the following type: typedef struct sUserProfile { char name[NAME_SZ]; char address[ADDRESS_SZ]; int socialNumber; char password[PASSWORD_SZ]; HashTable *mailbox; short msgCount; } UserProfile; And this is how I'm currently writing all the profiles to disk: void ioWriteNetworkState(SocialNetwork *social) { Vertex *currPtr = social->usersNetwork->vertices; UserProfile *user; FILE *fp = fopen("save/profiles.dat", "w"); if(!fp) { perror("fopen"); exit(EXIT_FAILURE); } fwrite(&(social->usersCount), sizeof(int), 1, fp); while(currPtr) { user = (UserProfile*)currPtr->value; fwrite(&(user->socialNumber), sizeof(int), 1, fp); fwrite(user->name, sizeof(char)*strlen(user->name), 1, fp); fwrite(user->address, sizeof(char)*strlen(user->address), 1, fp); fwrite(user->password, sizeof(char)*strlen(user->password), 1, fp); fwrite(&(user->msgCount), sizeof(short), 1, fp); break; currPtr = currPtr->next; } fclose(fp); } Notes: The first fwrite() you see will write the total user count in the graph so I know how much data I need to read back. The break is there for testing purposes. There's thousands of users and I'm still experimenting with the code. My questions: After reading this I decided to use fwrite() on each element instead of writing the whole structure. I also avoid writing the pointer to to the mailbox as I don't need to save that pointer. So, is this the way to go? Multiple fwrite()'s instead of a global one for the whole structure? Isn't that slower? How do I read back this content? I know I have to use fread() but I don't know the size of the strings, cause I used strlen() to write them. I could write the output of strlen() before writing the string, but is there any better way without extra writes?
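
    On the second question - reading strings back without knowing their length - one common approach is to length-prefix each string: write the length, then the characters, and reverse the order when reading. A short sketch (error handling omitted; a fixed-width integer type would be more portable than size_t if the file moves between 32- and 64-bit builds; the same helpers would be used for name, address and password):

        #include <stdio.h>
        #include <string.h>

        /* Write the length first, then the characters (no terminating '\0'). */
        static void write_string(FILE *fp, const char *s)
        {
            size_t len = strlen(s);
            fwrite(&len, sizeof(len), 1, fp);
            fwrite(s, 1, len, fp);
        }

        /* Read the length, then exactly that many characters. */
        static void read_string(FILE *fp, char *buf, size_t bufsz)
        {
            size_t len = 0;
            fread(&len, sizeof(len), 1, fp);
            if (len >= bufsz) len = bufsz - 1;  /* defensive truncation */
            fread(buf, 1, len, fp);
            buf[len] = '\0';
        }

        /* Usage sketch: write_string(fp, user->name);  read_string(fp, user->name, NAME_SZ); */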

    Read the article

  • Merge replication stopping without errors in SQL 2008 R2

    - by Rob Farley
    A non-SQL MVP friend of mine, who also happens to be a client, asked me for some help again last week. I was planning on writing this up even before Rob Volk (@sql_r) listed his T-SQL Tuesday topic for this month. Earlier in the year, I (well, LobsterPot Solutions, although I’d been the person mostly involved) had helped out with a merge replication problem. The Merge Agent on the subscriber was just stopping every time, shortly after it started. With no errors anywhere – not in the Windows Event Log, the SQL Agent logs, not anywhere. We’d managed to get the system working again, but didn’t have a good reason about what had happened, and last week, the problem occurred again. I asked him about writing up the experience in a blog post, largely because of the red herrings that we encountered. It was an interesting experience for me, also because I didn’t end up touching my computer the whole time – just tapping on my phone via Twitter and Live Msgr. You see, the thing with replication is that a useful troubleshooting option is to reinitialise the thing. We’d done that last time, and it had started to work again – eventually. I say eventually, because the link being used between the sites is relatively slow, and it took a long while for the initialisation to finish. Meanwhile, we’d been doing some investigation into what the problem could be, and were suitably pleased when the problem disappeared. So I got a message saying that a replication problem had occurred again. Reinitialising wasn’t going to be an option this time either. In this scenario, the subscriber having the problem happened to be in a different domain to the publisher. The other subscribers (within the domain) were fine, just this one in a different domain had the problem. Part of the problem seemed to be a log file that wasn’t being backed up properly. They’d been trying to back up to a backup device that had a corruption, and the log file was growing. Turned out, this wasn’t related to the problem, but of course, any time you’re troubleshooting and you see something untoward, you wonder. Having got past that problem, my next thought was that perhaps there was a problem with the account being used. But the other subscribers were using the same account, without any problems. The client pointed out that that it was almost exactly six months since the last failure (later shown to be a complete red herring). It sounded like something might’ve expired. Checking through certificates and trusts showed no sign of anything, and besides, there wasn’t a problem running a command-prompt window using the account in question, from the subscriber box. ...except that when he ran the sqlcmd –E –S servername command I recommended, it failed with a Named Pipes error. I’ve seen problems with firewalls rejecting connections via Named Pipes but letting TCP/IP through, so I got him to look into SQL Configuration Manager to see what kind of connection was being preferred... Everything seemed fine. And strangely, he could connect via Management Studio. Turned out, he had a typo in the servername of the sqlcmd command. That particular red herring must’ve been reflected in his cheeks as he told me. During the time, I also pinged a friend of mine to find out who I should ask, and Ted Kruger (@onpnt) ‘s name came up. Ted (and thanks again, Ted – really) reconfirmed some of my thoughts around the idea of an account expiring, and also suggesting bumping up the logging to level 4 (2 is Verbose, 4 is undocumented ridiculousness). 
I’d just told the client to push the logging up to level 2, but the log file wasn’t appearing. Checking permissions showed that the user did have permission on the folder, but still no file was appearing. Then it was noticed that the user had been switched earlier as part of the troubleshooting, and switching it back to the real user caused the log file to appear. Still no errors. A lot more information being pushed out, but still no errors. Ted suggested making sure the FQDNs were okay from both ends, in case the servers were unable to talk to each other. DNS problems can lead to hassles which can stop replication from working. No luck there either – it was all working fine. Another server started to report a problem as well. These two boxes were both SQL 2008 R2 (SP1), while the others, still working, were SQL 2005. Around this time, the client tried an idea that I’d shown him a few years ago – using a Profiler trace to see what was being called on the servers. It turned out that the last call being made on the publisher was sp_MSenumschemachange. A quick interwebs search on that showed a problem that exists in SQL Server 2008 R2, when stored procedures have more than 4000 characters. Running that stored procedure (with the same parameters) manually on SQL 2005 listed three stored procedures, the first of which did indeed have more than 4000 characters. Still no error though, and the problem as listed at http://support.microsoft.com/kb/2539378 describes an error that should occur in the Event log. However, this problem is the type of thing that is fixed by a reinitialisation (because it doesn’t need to send the procedure change across as a transaction). And a look in the change history of the long stored procs (you all keep them, right?), showed that the problem from six months earlier could well have been down to this too. Applying SP2 (with sufficient paranoia about backups and how to get back out again if necessary) fixed the problem. The stored proc changes went through immediately after the service pack was applied, and it’s been running happily since. The funny thing is that I didn’t solve the problem. He had put the Profiler trace on the server, and had done the search that found a forum post pointing at this particular problem. I’d asked Ted too, and although he’d given some useful information, nothing that he’d come up with had actually been the solution either. Sometimes, asking for help is the most useful thing you can do. Often though, you don’t end up getting the help from the person you asked – the sounding board is actually what you need. @rob_farley

    Read the article

  • Coming Up with a Good Algorithm for a Simple Idea

    - by mkoryak
    I need to come up with an algorithm that does the following: Lets say you have an array of positive numbers (e.g. [1,3,7,0,0,9]) and you know beforehand their sum is 20. You want to abstract some average amount from each number such that the new sum would be less by 7. To do so, you must follow these rules: you can only subtract integers the resulting array must not have any negative values you can not make any changes to the indices of the buckets. The more uniformly the subtraction is distributed over the array the better. Here is my attempt at an algorithm in JavaScript + underscore (which will probably make it n^2): function distributeSubtraction(array, goal){ var sum = _.reduce(arr, function(x, y) { return x + y; }, 0); if(goal < sum){ while(goal < sum && goal > 0){ var less = ~~(goal / _.filter(arr, _.identity).length); //length of array without 0s arr = _.map(arr, function(val){ if(less > 0){ return (less < val) ? val - less : val; //not ideal, im skipping some! } else { if(goal > 0){ //again not ideal. giving preference to start of array if(val > 0) { goal--; return val - 1; } } else { return val; } } }); if(goal > 0){ var newSum = _.reduce(arr, function(x, y) { return x + y; }, 0); goal -= sum - newSum; sum = newSum; } else { return arr; } } } else if(goal == sum) { return _.map(arr, function(){ return 0; }); } else { return arr; } } var goal = 7; var arr = [1,3,7,0,0,9]; var newArray = distributeSubtraction(arr, goal); //returned: [0, 1, 5, 0, 0, 7]; Well, that works but there must be a better way! I imagine the run time of this thing will be terrible with bigger arrays and bigger numbers. edit: I want to clarify that this question is purely academic. Think of it like an interview question where you whiteboard something and the interviewer asks you how your algorithm would behave on a different type of a dataset.

    Read the article

  • Free Document/Content Management System Using SharePoint 2010

    - by KunaalKapoor
    That’s right, it’s true. You can use the free version of SharePoint 2010 to meet your document and content management needs and even run your public facing website or an internal knowledge bank.  SharePoint Foundation 2010 is free. It may not have all the features that you get in the enterprise license but it still has enough to cater to your needs to build a document management system and replace age old file shares or folders. I’ve built a dozen content management sites for internal and public use exploiting SharePoint. There are hundreds of web content management systems out there (see CMS Matrix).  On one hand we have commercial platforms like SharePoint, SiteCore, and Ektron etc. which are the most frequently used and on the other hand there are free options like WordPress, Drupal, Joomla, and Plone etc. which are pretty common popular as well. But I would be very surprised if anyone was able to find a single CMS platform that is all things to all people. Infact not a lot of people consider SharePoint’s free version under the free CMS side but its high time organizations benefit from this. Through this blog post I wanted to present SharePoint Foundation as an option for running a FREE CMS platform. Even if you knew that there is a free version of SharePoint, what most people don’t realize is that SharePoint Foundation is a great option for running web sites of all kinds – not just team sites. It is a great option for many reasons, but in reality it is supported by Microsoft, and above all it is FREE (yay!), and it is extremely easy to get started.  From a functionality perspective – it’s hard to beat SharePoint. Even the free version, SharePoint Foundation, offers simple data connectivity (through BCS), cross browser support, accessibility, support for Office Web Apps, blogs, wikis, templates, document support, health analyzer, support for presence, and MUCH more.I often get asked: “Can I use SharePoint 2010 as a document management system?” The answer really depends on ·          What are your specific requirements? ·          What systems you currently have in place for managing documents. ·          And of course how much money you have J Benefits? Not many large organizations have benefited from SharePoint yet. For some it has been an IT project to see what they can achieve with it, for others it has been used as a collaborative platform or in many cases an extended intranet. SharePoint 2010 has changed the game slightly as the improvements that Microsoft have made have been noted by organizations, and we are seeing a lot of companies starting to build specific business applications using SharePoint as the basis, and nearly every business process will require documents at some stage. If you require a document management system and have SharePoint in place then it can be a relatively straight forward decision to use SharePoint, as long as you have reviewed the considerations just discussed. The collaborative nature of SharePoint 2010 is also a massive advantage, as specific departmental or project sites can be created quickly and easily that allow workers to interact in a variety of different ways using one source of information.  This also benefits an organization with regards to how they manage the knowledge that they have, as if all of their information is in one source then it is naturally easier to search and manage. Is SharePoint right for your organization? 
As just discussed, this can only be determined after defining your requirements and also planning a longer term strategy for how you will manage your documents and information. A key factor to look at is how the users would interact with the system and how much value would it get for your organization. The amount of data and documents that organizations are creating is increasing rapidly each year. Therefore the ability to archive this information, whilst keeping the ability to know what you have and where it is, is vital to any organizations management of their information life cycle. SharePoint is best used for the initial life of business documents where they need to be referenced and accessed after time. It is often beneficial to archive these to overcome for storage and performance issues. FREE CMS – SharePoint, Really? In order to show some of the completely of what comes with this free version of SharePoint 2010, I thought it would make sense to use Wikipedia (since every one trusts it as a credible source). Wikipedia shows that a web content management system typically has the following components: Document Management:   -       CMS software may provide a means of managing the life cycle of a document from initial creation time, through revisions, publication, archive, and document destruction. SharePoint is king when it comes to document management.  Version history, exclusive check-out, security, publication, workflow, and so much more.  Content Virtualization:   -       CMS software may provide a means of allowing each user to work within a virtual copy of the entire Web site, document set, and/or code base. This enables changes to multiple interdependent resources to be viewed and/or executed in-context prior to submission. Through the use of versioning, each content manager can preview, publish, and roll-back content of pages, wiki entries, blog posts, documents, or any other type of content stored in SharePoint.  The idea of each user having an entire copy of the website virtualized is a bit odd to me – not sure why anyone would need that for anything but the simplest of websites. Automated Templates:   -       Create standard output templates that can be automatically applied to new and existing content, allowing the appearance of all content to be changed from one central place. Through the use of Master Pages and Themes, SharePoint provides the ability to change the entire look and feel of site.  Of course, the older brother version of SharePoint – SharePoint Server 2010 – also introduces the concept of Page Layouts which allows page template level customization and even switching the layout of an individual page using different page templates.  I think many organizations really think they want this but rarely end up using this bit of functionality.  Easy Edits:   -       Once content is separated from the visual presentation of a site, it usually becomes much easier and quicker to edit and manipulate. Most WCMS software includes WYSIWYG editing tools allowing non-technical individuals to create and edit content. This is probably easier described with a screen cap of a vanilla SharePoint Foundation page in edit mode.  Notice the page editing toolbar, the multiple layout options…  It’s actually easier to use than Microsoft Word. Workflow management: -       Workflow is the process of creating cycles of sequential and parallel tasks that must be accomplished in the CMS. 
For example, a content creator can submit a story, but it is not published until the copy editor cleans it up and the editor-in-chief approves it. Workflow, it’s in there. In fact, the same workflow engine is running under SharePoint Foundation that is running under the other versions of SharePoint.  The primary difference is that with SharePoint Foundation – you need to configure the workflows yourself.   Web Standards: -       Active WCMS software usually receives regular updates that include new feature sets and keep the system up to current web standards. SharePoint is in the fourth major iteration under Microsoft with the 2010 release.  In addition to the innovation that Microsoft continuously adds, you have the entire global ecosystem available. Scalable Expansion:   -       Available in most modern WCMSs is the ability to expand a single implementation (one installation on one server) across multiple domains. SharePoint Foundation can run multiple sites using multiple URLs on a single server install.  Even more powerful, SharePoint Foundation is scalable and can be part of a multi-server farm to ensure that it will handle any amount of traffic that can be thrown at it. Delegation & Security:  -       Some CMS software allows for various user groups to have limited privileges over specific content on the website, spreading out the responsibility of content management. SharePoint Foundation provides very granular security capabilities. Read @ http://msdn.microsoft.com/en-us/library/ee537811.aspx Content Syndication:  -       CMS software often assists in content distribution by generating RSS and Atom data feeds to other systems. They may also e-mail users when updates are available as part of the workflow process. SharePoint Foundation nails it.  With RSS syndication and email alerts available out of the box, content syndication is already in the platform. Multilingual Support: -       Ability to display content in multiple languages. SharePoint Foundation 2010 supports more than 40 languages. Read More Read more @ http://msdn.microsoft.com/en-us/library/dd776256(v=office.12).aspxYou can download the free version from http://www.microsoft.com/en-us/download/details.aspx?id=5970

    Read the article

  • Hosting a WCF Service Lib through a Windows service get a System.InvalidOperationException: attempti

    - by JohnL
    I have a WCF Service Library containing five service contracts. The library is hosted through a Windows Service. Most if not all my configuration for the WCF Library is declaritive. The only thing I am doing in code for configuration is to pass the type of the class implementing the service contracts into ServiceHost. I then call Open on each of the services during the Windows Service OnStart event. Here is the error message I get: Service cannot be started. System.InvalidOperationException: Service '[Fubu.Conversion.Service1' has zero application (non-infrastructure) endpoints. This might be because no configuration file was found for your application, or because no service element matching the service name could be found in the configuration file, or because no endpoints were defined in the service element. at System.ServiceModel.Description.DispatcherBuilder.EnsureThereAreNonMexEndpoints(ServiceDescription description) at System.ServiceModel.Description.DispatcherBuilder.InitializeServiceHost(ServiceDescription description, ServiceHostBase serviceHost) at System.ServiceModel.ServiceHostBase.InitializeRuntime() at System.ServiceModel.ServiceHostBase.OnBeginOpen() at System.ServiceModel.ServiceHostBase.OnOpen(TimeSpan timeout) at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout) at System.ServiceModel.Channels.CommunicationObject.Open() at Fubu.RemotingHost.RemotingHost.StartServ... protected override void OnStart(string[] args) { // Uncomment to debug this properly //System.Diagnostics.Debugger.Break(); StartService1(); StartService2(); StartService3(); StartService4(); StartService5(); } Each of the above simply do the following: private void StartSecurityService() { host = new ServiceHost(typeof(Service1)); host.Open(); } Service Lib app.congfig summary <services> <service behaviorConfiguration="DefaultServiceBehavior" name="Fubu.Conversion.Service1"> <endpoint address="" binding="netTcpBinding" bindingConfiguration="TCPBindingConfig" name="Service1" bindingName="TCPEndPoint" contract="Fubu.Conversion.IService1"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="mex" binding="mexTcpBinding" bindingConfiguration="" name="mexSecurity" bindingName="TcpMetaData" contract="IMetadataExchange" /> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:8025/Fubu/Conversion/Service1/" /> </baseAddresses> </host> </service> ... Contract is set up as follows: namespace Fubu.Conversion.Service1 { [ServiceContract(Namespace = "net.tcp://localhost:8025/Fubu")] public interface IService1 { I have looked "high and low" for a solution without any luck. Is the answer obvious? The solution to this does not appear to be. Thanks
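
    One detail worth checking, offered as a likely cause rather than a confirmed one: when a WCF service library is hosted inside a Windows Service, the library's own app.config is not read at run time - the <system.serviceModel> section has to live in the host executable's configuration file (e.g. MyWindowsService.exe.config). The "zero application (non-infrastructure) endpoints" message is what ServiceHost reports when it cannot find any matching service/endpoint configuration. As a quick way to rule configuration out, an endpoint can also be added in code; a sketch inside the existing Windows Service class, reusing the address and contract from the config above:

        // Sketch: host Service1 with an explicitly added endpoint instead of relying on config.
        private void StartService1()
        {
            host = new ServiceHost(typeof(Service1),
                new Uri("net.tcp://localhost:8025/Fubu/Conversion/Service1/"));

            // Equivalent of the <endpoint> element from the library's app.config.
            host.AddServiceEndpoint(typeof(IService1), new NetTcpBinding(), "");

            host.Open();
        }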

    Read the article

  • php adding images to another image, exact positioning

    - by user271619
    I have a cool snippet of code that works well, except one thing. The code will take an icon I want to add to an existing picture. I can position it where I want too! Which is exactly what I need to do. However, I'm stuck on one thing, concerning the placement. The code "starting position" (on the main image: navIcons.png) is from the Bottom Right. I have 2 variables: $move_left = 10; & $move_up = 8;. So, the means I can position the icon.png 10px left, and 8px up, from the bottom right corner. I really really want to start the positioning from the Top Left of the image, so I'm really moving the icon 10px right & 8px down, from the top left position of the main image. Can someone look at my code and see if I'm just missing something that inverts that starting position? function attachIcon($imgname) { $mark = imagecreatefrompng($imgname); imagesavealpha($mark, true); list($icon_width, $icon_height) = getimagesize($imgname); $img = imagecreatefrompng('images/sprites/navIcons.png'); imagesavealpha($img, true); $move_left = 10; $move_up = 9; list($mainpic_width, $mainpic_height) = getimagesize('images/sprites/navIcons.png'); imagecopy($img, $mark, $mainpic_width-$icon_width-$move_left, $mainpic_height-$icon_height-$move_up, 0, 0, $icon_width, $icon_height); imagepng($img); // display the image + positioned icon in the browser //imagepng($img,'newnavIcon.png'); // rewrite the image with icon attached. } header('Content-Type: image/png'); attachIcon('icon.png'); ? For those who are wondering why I'd even bother doing this. In a nutshell, I like to add 16x16 icons to 1 single image, while using css to display that individual icon. This does involve me downloading the image (sprite) and open photoshop, add the new icon (positioning it), and reuploading it to the server. Not a massive ordeal, but just having fun with php.
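
    The destination coordinates passed to imagecopy() are what decide which corner the offsets are measured from: subtracting from the sprite's width and height anchors the icon to the bottom-right, so measuring from the top-left just means passing the offsets directly. A sketch of the same function with the offsets applied from the top-left ($move_right / $move_down are illustrative names):

        <?php
        // Sketch: place the icon relative to the TOP-LEFT corner of navIcons.png.
        function attachIconTopLeft($imgname) {
            $mark = imagecreatefrompng($imgname);
            imagesavealpha($mark, true);
            list($icon_width, $icon_height) = getimagesize($imgname);

            $img = imagecreatefrompng('images/sprites/navIcons.png');
            imagesavealpha($img, true);

            $move_right = 10;   // 10px in from the left edge
            $move_down  = 8;    // 8px down from the top edge

            // Destination x/y are simply the offsets; no subtraction from the sprite size.
            imagecopy($img, $mark, $move_right, $move_down, 0, 0, $icon_width, $icon_height);
            imagepng($img);
        }

        header('Content-Type: image/png');
        attachIconTopLeft('icon.png');
        ?>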

    Read the article

  • Java object to XML Elements?

    - by DaveKub
    I'm working on a webservices client app and I have it mostly working. I can retrieve and read data from the third-party webservice fine. Now I need to submit some data and I'm stuck. The classes for the objects I'm retrieving/submitting were generated from XSD files via the xjc tool. The part I'm stuck on is turning one of those objects into an XML tree to submit to the webservice. When I retrieve/send a request from/to the ws, it contains a 'payload' object. This is defined in java code as (partial listing): @XmlAccessorType(XmlAccessType.FIELD) @XmlType(name = "PayloadType", propOrder = { "compressed", "document", "any", "format" }) public class PayloadType { @XmlElement(name = "Compressed") protected String compressed; @XmlElement(name = "Document") protected List<String> document; @XmlAnyElement protected List<Element> any; protected String format; public List<Element> getAny() { if (any == null) { any = new ArrayList<Element>(); } return this.any; } } The only field I'm concerned with is the 'any' field which contains an XML tree. When I retrieve data from the ws, I read that field with something like this: ('root' is of org.w3c.dom.Element type and is the result of calling 'getAny().get(0)' on the payload object) NodeList nl = root.getElementsByTagName("ns1:Process"); // "ns1:Process" is an XML node to do something with if (nl != null && nl.getLength() > 0) { for (int i = 0; i < nl.getLength(); i++) { Element proc = (Element) nl.item(i); try { // do something with the 'proc' Element here... } catch (Exception ex) { // handle problems here... } } } Submitting data is where I'm stuck. How do I take a java object created from one of the classes generated from XSD and turn it into an Element object that I can add to the 'any' List of the payload object?? For instance, if I have a DailyData class and I create and populate it with data: DailyData dData = new DailyData(); dData.setID = 34; dData.setValues = "3,5,76,23"; How do I add that 'dData' object to the 'any' List of the payload object? It has to be an Element. Do I do something with a JAXBContext marshaller? I've used that to dump the 'dData' object to the screen to check the XML structure. I'm sure the answer is staring me in the face but I just can't see it! Dave
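
    One common way to turn a JAXB-generated object into an org.w3c.dom.Element is to marshal it into an empty DOM Document and take the document element. A sketch (exception handling omitted; it assumes xjc generated normal setters such as setID(int), and the JAXBElement wrapper shown in the comment is only needed if DailyData has no @XmlRootElement):

        import javax.xml.bind.JAXBContext;
        import javax.xml.bind.Marshaller;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;

        DailyData dData = new DailyData();
        dData.setID(34);
        dData.setValues("3,5,76,23");

        JAXBContext ctx = JAXBContext.newInstance(DailyData.class);
        Marshaller m = ctx.createMarshaller();

        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();

        // If DailyData lacks @XmlRootElement, wrap it first:
        // m.marshal(new JAXBElement<DailyData>(new QName("DailyData"), DailyData.class, dData), doc);
        m.marshal(dData, doc);

        Element element = doc.getDocumentElement();
        payload.getAny().add(element);   // 'payload' is the PayloadType instance to submit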

    Read the article

  • What to do when you need more verbs in REST

    - by Richard Levasseur
    There is another similar question to mine, but the discussion veered away from the problem I'm encounting. Say I have a system that deals with expense reports (ER). You can create and edit them, add attachments, and approve/reject them. An expense report might look like this: GET /er/1 => {"title": "Trip to NY", "totalcost": "400 USD", "comments": [ "john: Please add the total cost", "mike: done, can you approve it now?" ], "approvals": [ {"john": "Pending"}, {"finance-group": "Pending"}] } That looks fine, right? Thats what an expense report document looks like. If you want to update it, you can do this: POST /er/1 {"title": "Trip to NY 2010"} If you want to approve it, you can do this: POST /er/1/approval {"approved": true} But, what if you want to update the report and approve it at the same time? How do we do that? If you only wanted to approve, then doing a POST to something like /er/1/approval makes sense. We could put a flag in the URL, POST /er/1?approve=1, and send the data changes as the body, but that flag doesn't seem RESTful. We could put special field to be submitted, too, but that seems a bit hacky, too. If we did that, then why not send up data with attributes like set_title or add_to_cost? We could create a new resource for updating and approving, but (1) I can't think of how to name it without verbs, and (2) it doesn't seem right to name a resource based on what actions can be done to it (what happens if we add more actions?) We could have an X-Approve: True|False header, but headers seem like the wrong tool for the job. It'd also be difficult to get set headers without using javascript in a browser. We could use a custom media-type, application/approve+yes, but that seems no better than creating a new resource. We could create a temporary "batch operations" url, /er/1/batch/A. The client then sends multiple requests, perhaps POST /er/1/batch/A to update, then POST /er/1/batch/A/approval to approve, then POST /er/1/batch/A/status to end the batch. On the backend, the server queues up all the batch requests somewhere, then processes them in the same backend-transaction when it receives the "end batch processing" request. The downside with this is, obviously, that it introduces a lot of complexity. So, what is a good, general way to solve the problem of performing multiple actions in a single request? General because its easy to imagine additional actions that might be done in the same request: Suppress or send notifications (to email, chat, another system, whatever) Override some validation (maximum cost, names of dinner attendees) Trigger backend workflow that doesn't have a representation in the document.

    Read the article

  • Getting the executed output of an aspx page after a short delay

    - by Ankur
    Hi all, I have an aspx page which has some javascript code like <script> setTimeout("document.write('" + place.address + "');",1); </script> As it is clear from the code it will going to write something on the page after a very short delay of 1 ms. I have created an another page to get the page executed by some query string and get its output. The problem is I can not avoid the delay as simply writing document.write(place.address); will not print anything as it takes a little time to get values so if i set it in setTimeout for delayed output of 1 ms it always return me a value If I request the output from another page using System.Net.WebClient wc = new System.Net.WebClient(); System.IO.StreamReader sr = new System.IO.StreamReader(wc.OpenRead("http://localhost:4859/Default.aspx?lat=" + lat + "&lng=" + lng)); string strData = sr.ReadToEnd(); I get the source code of the document instead of the desired output can anyone help me out for either avoiding that delay or else delayed the client request ouput so that i get a desired value not the source code The js on default.aspx is <script type="text/javascript"> var geocoder; var address; function initialize() { geocoder = new GClientGeocoder(); var qs=new Querystring(); if(qs.get("lat") && qs.get("lng")) { geocoder.getLocations(new GLatLng(qs.get("lat"),qs.get("lng")),showAddress); } else { document.write("Invalid Access Or Not valid lat long is provided."); } } function getAddress(overlay, latlng) { if (latlng != null) { address = latlng; geocoder.getLocations(latlng, showAddress); } } function showAddress(r) { place = r.Placemark[0]; setTimeout("document.write('" + place.address + "');",1); //document.write(place.address); } </script> and the code on requestClient.aspx is as System.Net.WebClient wc = new System.Net.WebClient(); System.IO.StreamReader sr = new System.IO.StreamReader(wc.OpenRead("http://localhost:4859/Default.aspx?lat=" + lat + "&lng=" + lng)); string strData = sr.ReadToEnd();

    Read the article

  • IIS doing unexpected redirect

    - by user2967489
    I have two websites, abc.com and abc.co.in, and two web servers. The following issue happens only on abc.co.in, with the same application deployed on both. We have written a custom IHttpModule that does a rewrite to abc.co.in?some=data. Expected behavior: when the user enters some.abc.co.in, the browser still displays some.abc.co.in but internally calls abc.co.in?some=data. Actual behavior: the page is rendered properly, but in the browser the URL changes to some.abc.co.in?some=data. I checked what is happening: 1. First the server receives the request and does a 301 redirect. 2. The redirect location is some.abc.co.in?some=data. I have been stuck on this for a day and it is critical to fix to get our site up and running. How can I debug this issue further? Can anyone think of a possible cause? The ETW trace shows <ApplicationData> <TraceData> <DataItem> <OldUrl>/</OldUrl> <NewUrl>/fp?&id=hazzel&params=</NewUrl> </DataItem> </TraceData> </ApplicationData> <ApplicationData> <TraceData> <DataItem> <ModuleName>DefaultDocumentModule</ModuleName> <Notification>128</Notification> <HttpStatus>301</HttpStatus> <HttpReason>Moved Permanently</HttpReason> </DataItem> </TraceData> </ApplicationData> <ApplicationData> <TraceData> <DataItem> <Headers>Content-Type: text/html; charset=UTF-8 Location: http://some.abc.co.in/fp/?id=data Server: Microsoft-IIS/7.5 X-Powered-By: ASP.NET </Headers> </DataItem> </TraceData> </ApplicationData>
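    For what it is worth, the trace points at DefaultDocumentModule: when a rewritten path maps to a physical folder and has no trailing slash, IIS answers with a 301 "courtesy" redirect to the slash-terminated URL, and that redirect is what the browser ends up showing. A minimal sketch of a rewrite that keeps the trailing slash so the module has nothing to redirect (the host check and target path are placeholders, not taken from the actual module):

        // Sketch of an IHttpModule rewrite that targets "/fp/" (with the trailing slash)
        // so DefaultDocumentModule does not answer with a 301. Host/path logic is illustrative.
        using System.Web;

        public class SubdomainRewriteModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += (sender, e) =>
                {
                    HttpContext ctx = ((HttpApplication)sender).Context;
                    string host = ctx.Request.Url.Host;              // e.g. "some.abc.co.in"
                    string[] parts = host.Split('.');
                    // Placeholder condition: only rewrite root requests for a subdomain.
                    if (parts.Length > 3 && ctx.Request.Path == "/")
                    {
                        // Note the trailing slash in the rewritten path.
                        ctx.RewritePath("/fp/", null, "id=" + parts[0]);
                    }
                };
            }

            public void Dispose() { }
        }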

    Read the article

  • How to get jQuery to work with Prototype

    - by thinkfuture
    Ok so here is the situation. Been pulling my hair out on this one. I'm a noob at this. Only been using Rails for about 6 weeks. I'm using the standard setup package, and my code leverages Prototype helpers heavily. Like I said, noob ;) So I'm trying to put in some jQuery effects, like PrettyPhoto. But what happens is that when the page is first loaded, PrettyPhoto works great. However, once someone uses a Prototype helper, like a link created with link_to_remote, PrettyPhoto stops working. I've tried jRails, all of the fixes proposed on the jQuery site to stop conflicts... http://docs.jquery.com/Using_jQuery_with_Other_Libraries ...even done some crazy things like renaming all of the $ in prototype.js to $$$ to no avail. Either the Prototype helpers break, or jQuery breaks. Seems nothing I do can get these to work together. Any ideas? Here is part of my application.html.erb <%= javascript_include_tag 'application' %> <%= javascript_include_tag 'tooltip' %> <%= javascript_include_tag 'jquery' %> <%= javascript_include_tag 'jquery-ui' %> <%= javascript_include_tag "jquery.prettyPhoto" %> <%= javascript_include_tag 'prototype' %> <%= javascript_include_tag 'scriptalicious' %> </head> <body> <script type="text/javascript" charset="utf-8"> jQuery(document).ready( function() { jQuery("a[rel^='prettyPhoto']").prettyPhoto(); }); </script> If I put prototype before jquery, the Prototype helpers don't work. If I put the noConflict clause in, neither works. Thanks in advance! Chris BTW: when I try this, from the jQuery site: <script> jQuery.noConflict(); // Use jQuery via jQuery(...) jQuery(document).ready(function(){ jQuery("div").hide(); }); // Use Prototype with $(...), etc. $('someid').hide(); </script> my page disappears!

    Read the article

  • How can I include additional markup within a 'Content' inner property of an ASP.Net WebControl?

    - by GenericTypeTea
    I've searched the site and I cannot find a solution for my problem, so apologies if it's already been answered (I'm sure someone must have asked this before). I have written a jQuery popup window that I've packaged up as a WebControl and IScriptControl. The last step is to be able to write the markup within the tags of my control. I've used the InnerProperty attribute a few times, but only for including lists of strongly typed classes. Here's my property on the WebControl: [PersistenceMode(PersistenceMode.InnerProperty)] [DesignerSerializationVisibility(DesignerSerializationVisibility.Content)] public something??? Content { get { if (_content == null) { _content = new something???(); } return _content; } } private something??? _content; Here's the HTML markup of what I'm after: <ctr:WebPopup runat="server" ID="win_Test" Hidden="false" Width="100px" Height="100px" Modal="true" WindowCaption="Test Window" CssClass="window"> <Content> <div style="display:none;"> <asp:Button runat="server" ID="Button1" OnClick="Button1_Click" /> </div> <%--Etc--%> <%--Etc--%> </Content> </ctr:WebPopup> Unfortunately I don't know what type my Content property should be. I basically need to replicate the UpdatePanel's ContentTemplate. EDIT: So the following allows a template container to be automatically created, but no controls show up. What's wrong with what I'm doing? [PersistenceMode(PersistenceMode.InnerProperty)] [DesignerSerializationVisibility(DesignerSerializationVisibility.Content)] public ITemplate Content { get { return _content; } set { _content = value; } } private ITemplate _content; EDIT2: Overriding CreateChildControls allows the controls within the ITemplate to be rendered: protected override void CreateChildControls() { if (this.Content != null) { this.Controls.Clear(); this.Content.InstantiateIn(this); } base.CreateChildControls(); } Unfortunately I cannot now access the controls within the ITemplate from the code-behind file of the page. I.e. if I put a button within my markup like so: <ctr:WebPopup runat="server" ID="win_StatusFilter"> <Content> <asp:Button runat="server" ID="btn_Test" Text="Cannot access this from code behind?" /> </Content> </ctr:WebPopup> I then cannot access btn_Test from the code behind: protected void Page_Load(object sender, EventArgs e) { btn_Test.Text = "btn_Test is not present in Intellisense and is not accessible to the page. It does, however, render correctly."; }
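    Regarding the last EDIT: once the markup lives in an ITemplate, btn_Test is no longer generated as a field on the page, so it has to be looked up with FindControl after the template has been instantiated. A rough sketch of one way to do that, assuming the popup control and page from the question (the container and the lookup below are illustrative, not the control's actual implementation):

        // Sketch: instantiate the template into a child container, then locate templated
        // controls from the page via FindControl instead of relying on a generated field.
        protected override void CreateChildControls()
        {
            this.Controls.Clear();
            Control container = new Control();   // could also be a class implementing INamingContainer
            if (this.Content != null)
            {
                this.Content.InstantiateIn(container);
            }
            this.Controls.Add(container);
            base.CreateChildControls();
        }

        // In the page's code-behind, look the button up at runtime:
        protected void Page_Load(object sender, EventArgs e)
        {
            Button btn = win_StatusFilter.FindControl("btn_Test") as Button;
            if (btn != null)
            {
                btn.Text = "Found via FindControl";
            }
        }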

    Read the article

  • Why would some POST data go missing when using Uploadify?

    - by Chad Johnson
    I have been using Uploadify in my PHP application for the last couple of months, and I've been trying to track down an elusive bug. I receive emails when fatal errors occur, and they provide me a good amount of detail. I've received dozens of them. I have not, however, been able to reproduce the problem myself. Some users (like myself) experience no problem, while others do. Before I give details of the problem, here is the flow. User visits the edit screen for a page in the CMS I am using. The record id for the page is put into a form as a hidden value. User clicks the Uploadify browse button and selects a file (only single file selection is allowed). User clicks the Submit button for my form. jQuery intercepts the form submit action, triggers Uploadify to start uploading, and returns false for the submit action (manually cancelling the form submit event so that Uploadify can take over). Uploadify uploads to a custom process script. Uploadify finishes uploading and triggers the Javascript completion callback. The Javascript callback calls $('#myForm').submit() to submit the form. Now that's what SHOULD happen. I've received reports of the upload freezing at 100% and also others where "I/O Error" is displayed. What's happening is, the form is submitted by the completion callback, but some post parameters present in the form are simply not in the post data. The id for the page, which earlier I said is added to the form as a hidden field, is simply not there in the post data ($_POST)--there is no item for 'id' in the $_POST array. The strange thing is, the post data DOES contain values for some fields. For instance, I have an input of type text called "name" which is for the record name, and it does show up in the post data. Here is what I've gathered: This has been happening on Mac OS X 10.5 and 10.6, Windows XP, and Windows 7. I can post exact user agent strings if that helps. Users must use Flash 10.0.12 or later. We've made it so the form reverts to using a normal "file" field if they have < 10.0.12. Does anyone have ANY ideas at all what the cause of this could be?

    Read the article

  • Translating itunes affiliate rss via xslt

    - by jd
    I can't get this working for the life of me. Here is a snippet of the XML I get from an RSS feed from the iTunes affiliate program. I want to print the values within the tags but I cannot for some reason: <?xml version="1.0" encoding="utf-8"?> <feed xmlns:im="http://itunes.apple.com/rss" xmlns="http://www.w3.org/2005/Atom" xml:lang="en"> <id>http://ax.itunes.apple.com/WebObjects/MZStoreServices.woa/ws/RSS/toppaidapplications/sf=143441/limit=100/genre=6014/xml</id><title>iTunes Store: Top Paid Applications</title><updated>2010-03-24T15:36:42-07:00</updated><link rel="alternate" type="text/html" href="http://itunes.apple.com/WebObjects/MZStore.woa/wa/viewTop?id=25180&amp;popId=30"/><link rel="self" href="http://ax.itunes.apple.com/WebObjects/MZStoreServices.woa/ws/RSS/toppaidapplications/sf=143441/limit=100/genre=6014/xml"/><icon>http://phobos.apple.com/favicon.ico</icon><author><name>iTunes Store</name><uri>http://www.apple.com/itunes/</uri></author><rights>Copyright 2008 Apple Inc.</rights> <entry> <updated>date</updated> <id>someID</id> <title>a</title> <im:name>b</im:name> </entry> <entry> <updated>date2</updated> <id>someID2</id> <title>a2</title> <im:name>b2</im:name> </entry> </feed> If I try <xsl:apply-templates match="entry"/> it spits out the entire contents of the file. If I use <xsl:call-template name="entry"> it will show only one entry, and I have to use <xsl:value-of select="//*[local-name(.)='name']"/> to get the name, but that's a hack. I've used XSLT before for XML without namespaces and XML that has proper parent-child relationships, but not like this RSS feed. Notice entry is not wrapped in entries or anything. Any help is appreciated. I want to use XSLT because I want to alter the iTunes link to go through my affiliate account - so something automated wouldn't work for me.

    Read the article

  • Computation on db data then list them using either SimpleCursorAdapter or ArrayAdapter

    - by kc2uno
    Hi all, I just started programming in Android a few weeks ago, so I am not entirely sure how to deal with listing values. Please help me out! I have some questions regarding displaying data sets from the db in a list. Currently I have a cursor returned by my db that points to a list of rows, and I want to display 2 column values in a single row of the list. The row xml looks like this: <TextView android:id="@+id/text1" android:textSize="16sp" android:textStyle="bold" android:layout_width="fill_parent" android:layout_height="wrap_content"/> <TextView android:id="@+id/text2" android:textSize="14sp" android:layout_width="fill_parent" android:layout_height="wrap_content"/> so I was thinking of using SimpleCursorAdapter, which supposedly makes my life easier by displaying the data in a list. However, that is only true if I want to display the raw data. For the purposes of my program I need to do some computations on the raw data sets, then display them. I am not sure how to do that using SimpleCursorAdapter. Here's how I display the raw data: String[] from = new String[]{BtDbAdapter.KEY_EX_TYPE,BtDbAdapter.KEY_EX_TIMESTAMP}; int[] to = new int[]{R.id.text1, R.id.text2}; // Now create a simple cursor adapter and set it to display SimpleCursorAdapter records = new SimpleCursorAdapter(this, R.layout.exset_row, mExsetCursor, from, to); setListAdapter(records); Is there a way to do computation on the data in those rows before I bind it with the SimpleCursorAdapter? I was trying an alternative way of doing this with an ArrayList and ArrayAdapter, but that way I don't know how to achieve displaying 2 items in a single row. This is my code using ArrayAdapter, which only displays 1 TextView in a row instead of 2: //fill in the array timestamp_arr = new ArrayList<String>(); type_arr = new ArrayList<String>(); fillRecord(); Log.d(TAG,"setting now in recordlist"); setListAdapter(new ArrayAdapter<String>(this, R.layout.list_item,timestamp_arr)); setListAdapter(new ArrayAdapter<String>(this, R.layout.list_item2,type_arr)); It's very obvious that it only displays one TextView per row, because the second ArrayAdapter I set overwrites the first one! I was trying to use R.id.text1 and R.id.text2 for them, but it gave me some errors saying 04-23 01:40:58.658: ERROR/AndroidRuntime(3309): android.content.res.Resources$NotFoundException: Resource ID #0x7f070008 type #0x12 is not valid I believe the second method can achieve this, but I'm not sure how to deal with the layout problems, so if you have any suggestions, please post them. Thank you!!

    Read the article

  • Marshal.PtrToStructure (and back again) and generic solution for endianness swapping

    - by cgyDeveloper
    I have a system where a remote agent sends serialized structures (from an embedded C system) for me to read and store via IP/UDP. In some cases I need to send back the same structure types. I thought I had a nice setup using Marshal.PtrToStructure (receive) and Marshal.StructureToPtr (send). However, a small gotcha is that the network big-endian integers need to be converted to my x86 little-endian format to be used locally. When I'm sending them off again, big-endian is the way to go. Here are the functions in question: private static T BytesToStruct<T>(ref byte[] rawData) where T: struct { T result = default(T); GCHandle handle = GCHandle.Alloc(rawData, GCHandleType.Pinned); try { IntPtr rawDataPtr = handle.AddrOfPinnedObject(); result = (T)Marshal.PtrToStructure(rawDataPtr, typeof(T)); } finally { handle.Free(); } return result; } private static byte[] StructToBytes<T>(T data) where T: struct { byte[] rawData = new byte[Marshal.SizeOf(data)]; GCHandle handle = GCHandle.Alloc(rawData, GCHandleType.Pinned); try { IntPtr rawDataPtr = handle.AddrOfPinnedObject(); Marshal.StructureToPtr(data, rawDataPtr, false); } finally { handle.Free(); } return rawData; } And a quick example structure that might be used like this: byte[] data = this.sock.Receive(ref this.ipep); Request request = BytesToStruct<Request>(ref data); Where the structure in question looks like: [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi, Pack = 1)] private struct Request { public byte type; public short sequence; [MarshalAs(UnmanagedType.ByValArray, SizeConst = 5)] public byte[] address; } What (generic) way can I swap the endianness when marshalling the structures? My need is such that the locally stored 'public short sequence' in this example will be little-endian for display to the user. I don't want to have to swap the endianness in a structure-specific way. My first thought was to use Reflection, but I'm not very familiar with that feature. Also, I hoped that there would be a better solution out there that somebody could point me towards. Thanks in advance :)
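    For reference, one common generic approach is to walk the struct's fields with reflection and byte-swap the integral ones after BytesToStruct (and again before StructToBytes). A sketch only, assuming every short/int/long field in the struct is in network order, which may not hold for every message:

        // Sketch: generic endianness swap over all integral fields of a struct,
        // using IPAddress.NetworkToHostOrder. Byte and byte[] fields are left untouched.
        using System.Net;
        using System.Reflection;

        static class EndianHelper
        {
            public static T SwapIntegralFields<T>(T value) where T : struct
            {
                object boxed = value;   // box once so FieldInfo.SetValue sticks
                foreach (FieldInfo field in typeof(T).GetFields(
                             BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance))
                {
                    if (field.FieldType == typeof(short))
                        field.SetValue(boxed, IPAddress.NetworkToHostOrder((short)field.GetValue(boxed)));
                    else if (field.FieldType == typeof(int))
                        field.SetValue(boxed, IPAddress.NetworkToHostOrder((int)field.GetValue(boxed)));
                    else if (field.FieldType == typeof(long))
                        field.SetValue(boxed, IPAddress.NetworkToHostOrder((long)field.GetValue(boxed)));
                }
                return (T)boxed;
            }
        }

        // Usage sketch: Request request = EndianHelper.SwapIntegralFields(BytesToStruct<Request>(ref data));

    Since the swap is symmetric, the same helper can be applied just before StructToBytes when sending the structure back out.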

    Read the article

< Previous Page | 1570 1571 1572 1573 1574 1575 1576 1577 1578 1579 1580 1581  | Next Page >