Search Results

Search found 31410 results on 1257 pages for 'disk based'.


  • L2E many to many query

    - by 5YrsLaterDBA
    I have four tables:

        Users                 PrivilegeGroups          rdPrivileges      LinkPrivilege
        -----------           ----------------         ---------------   ---------------
        userId(pk)            privilegeGroupId(pk)     privilegeId(pk)   privilegeId(pk, fk)
        privilegeGroupId(fk)  name                     code              privilegeGroupId(pk, fk)

    L2E will not create a LinkPrivilege entity for me, so we only have the Users, PrivilegeGroups and rdPrivileges entities. PrivilegeGroups and rdPrivileges have a many-to-many relationship. What I need to do is retrieve all code values from the rdPrivileges table based on a passed-in userId. How can I do it?

    EDIT - working code:

        var acc = from u in db.Users
                  from pg in db.PrivilegeGroups
                  from p in pg.rdPrivileges
                  where u.UserId == userId
                        && u.PrivilegeGroups.PrivilegeGroupId == pg.PrivilegeGroupId
                  select p.Code;
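    Assuming the Users entity exposes the same PrivilegeGroups reference used in the working code above, a more compact sketch of the same lookup via navigation properties (illustrative only; the names are taken from the question):

        // Sketch: walk from the user to its group, then to the group's privileges,
        // and project out the codes. Assumes u.PrivilegeGroups is the single group
        // reference shown in the working query above.
        var codes = (from u in db.Users
                     where u.UserId == userId
                     from p in u.PrivilegeGroups.rdPrivileges
                     select p.Code).ToList();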

    Read the article

  • wsimport and header params for logging

    - by Milan
    I have this situation: I am generating a form based on the WSDL. I got it working, but I ran into a case where the wsimport tool generates classes whose methods take header params (for authentication), and those params are not simple types but complex ones. The problem is that I don't know in advance which classes will be generated, so I need simple types for the methods.

        @WebMethod(operationName = "DNSLookup", action = "http://www.strikeiron.com/DNSLookup")
        @WebResult(name = "DNSLookupResult", targetNamespace = "http://www.strikeiron.com")
        @RequestWrapper(localName = "DNSLookup", targetNamespace = "http://www.strikeiron.com", className = "invoker.DNSLookup")
        @ResponseWrapper(localName = "DNSLookupResponse", targetNamespace = "http://www.strikeiron.com", className = "invoker.DNSLookupResponse")
        public SIWsOutputOfDNSInfo dnsLookup(
            @WebParam(name = "HostNameOrIPAddress", targetNamespace = "http://www.strikeiron.com")
            String hostNameOrIPAddress,
            @WebParam(name = "LicenseInfo", targetNamespace = "http://ws.strikeiron.com", header = true, partName = "LicenseInfo")
            LicenseInfo licenseInfo,
            @WebParam(name = "SubscriptionInfo", targetNamespace = "http://ws.strikeiron.com", header = true, mode = WebParam.Mode.OUT, partName = "SubscriptionInfo")
            Holder<SubscriptionInfo> subscriptionInfo);

    You can see the LicenseInfo licenseInfo and Holder<SubscriptionInfo> subscriptionInfo parameters. Is it possible to somehow specify that simple types should be used for the header params?

    Read the article

  • The subscription model behind CSS selectors?

    - by Martin Kristiansen
    With CSS selectors, a query string like body > h1.span subscribes to a specific set of nodes in the tree. Does anyone know how this is done? How does the browser select the result set for a selector, and is there a trick to making it efficient? I imagine there being some sort of hierarchical type-tree for the entire structure, to which the nodes subscribe and which is used when doing the selector queries, but this is only a guess. Does anyone know the real answer? Or, even more interesting, what would be the best way to do dynamic lookups on a tree based on jQuery/CSS search queries?

    Read the article

  • Fit Lightbox container in window if image is larger

    - by Bobe
    I'm just looking for a simple way to set the max width and height of the Lightbox container and image based on the window size if the image is larger than the current window size. So say the image is 2000x1200 and the window is 1280x1024, then the max-height and max-width of div.lb-outerContainer and img.lb-image should be set to $(window).height() - 286, $(window).width() - 60 and $(window).height() - 306, $(window).width() - 80 respectively. I'm just having a bit of trouble determining where to go about implementing these rules. Do I do it in the lightbox.js file? If so, where? Would it be acceptable to just throw in some script on the page it's used on?

    Read the article

  • how to set flex combobox cursor position

    - by crazy horse
    I have a combobox implementation as follows - Based on user input (min 2 chars) in the editable combobox, the data provider is refreshed and drop-down opened, showing different data sets as user input varies. Problem is that after drop-down opens, the cursor moves back to the beginning. So for instance, the user types in "ab", and wants to type in "c" to form the search string "abc". Due to the cursor re-setting its position to 0, the search string instead ends up as "cab". Here's what I tried already (doesn't work) : textInput.mx_internal::getTextField().setSelection(index, index); where index = length of user input. This selects text from index to index (which effectively un-selects text) and is supposed to place the cursor at the end. Any thoughts?

    Read the article

  • Value of text box disappears - binding viewmodel to a tab (content control)

    - by Eli Perpinyal
    Based on the MVVM example by Josh Smith, I have implemented the multi-tab option, which binds each tab to a different view model using a simple DataTemplate that maps a view model to a view.

        <DataTemplate DataType="{x:Type fixtureVM:SearchViewModel}">
            <SearchVw:SearchView/>
        </DataTemplate>

    The issue I'm having is that when I switch tabs and then switch back again, the value in the textbox disappears. When I bind the Text of the textbox to a value in the ViewModel it does not disappear. This is fine, and I can overcome it, but I am having another issue, for example with the position of the scroll bar in a grid being lost once the tab has lost focus. Why is the value disappearing? I'm assuming it is a WPF subsystem task that cleans up resources!? How can I avoid this? I also feel it might be slowing down my app.
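    For reference, a minimal sketch of the workaround mentioned above, i.e. keeping the value in the view model so it survives the view being destroyed and re-created when the DataTemplate re-instantiates it on a tab switch (the class and property names here are illustrative, not taken from the question):

        // Illustrative view model: because the text lives here rather than in the
        // view, it is preserved across tab switches even though the TextBox itself
        // is re-created each time the ContentControl re-applies the DataTemplate.
        public class SearchViewModel : System.ComponentModel.INotifyPropertyChanged
        {
            private string _searchText;

            public string SearchText
            {
                get { return _searchText; }
                set
                {
                    _searchText = value;
                    OnPropertyChanged("SearchText");
                }
            }

            public event System.ComponentModel.PropertyChangedEventHandler PropertyChanged;

            private void OnPropertyChanged(string propertyName)
            {
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new System.ComponentModel.PropertyChangedEventArgs(propertyName));
            }
        }

    The TextBox would then bind with Text="{Binding SearchText, UpdateSourceTrigger=PropertyChanged}", so the current value is pushed into the view model as the user types.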

    Read the article

  • Test Column exists, Add Column, and Update Column

    - by david.clarke
    I'm trying to write a SQL Server database update script. I want to test for the existence of a column in a table, then if it doesn't exist add the column with a default value, and finally update that column based on the current value of a different column in the same table. I want this script to be runnable multiple times, the first time updating the table and on subsequent runs the script should be ignored. My script currently looks like the following:

        IF NOT EXISTS(SELECT * FROM INFORMATION_SCHEMA.COLUMNS
                      WHERE TABLE_NAME = 'PurchaseOrder' AND COLUMN_NAME = 'IsDownloadable')
        BEGIN
            ALTER TABLE [dbo].[PurchaseOrder]
                ADD [IsDownloadable] bit NOT NULL DEFAULT 0

            UPDATE [dbo].[PurchaseOrder]
            SET [IsDownloadable] = 1
            WHERE [Ref] IS NOT NULL
        END

    SQL Server returns error "Invalid column name 'IsDownloadable'", i.e. I need to commit the DDL before I can update the column. I've tried various permutations but I'm getting nowhere fast.

    Read the article

  • Best and simple data structure

    - by anshu
    I am trying to create the below matrix in my VB.NET code, so that during processing I can get the match scores for pairs of letters. For example: What is the match for A and N? I will look into my in-built matrix and return -2. Similarly, what is the match for P and L? I will look into my in-built matrix and return -3. Please suggest how to go about it. I was trying to use a nested dictionary like this:

        Dim myNestedDictionary As New Dictionary(Of String, Dictionary(Of String, Integer))()
        Dim lTempDict As New Dictionary(Of String, Integer)
        lTempDict.Add("A", 4)
        myNestedDictionary.Add("A", lTempDict)

    The other way could be to read the matrix from a text-based file and then fill a two-dimensional array. Thanks.

    Read the article

  • Does JSONP scale? How many JSONP requests can I send?

    - by Cheeso
    Based on Please explain JSONP, I understand that JSONP can be used to get around the same-origin policy. But in order to do that, the page must use a <script> tag. I know that pages can dynamically emit new script tags, such as with:

        <script type="text/javascript" language='javascript'>
            document.write('<script type="text/javascript" ' +
                'id="contentloadtag" defer="defer" ' +
                'src="javascript:void(0)"><\/script>');
            var contentloadtag = document.getElementById("contentloadtag");
            contentloadtag.onreadystatechange = function() {
                if (this.readyState == "complete") {
                    init();
                }
            }
        </script>

    (The above works in IE; I don't think it works in FF.) But does this mean, effectively, that every JSONP call requires me to emit another <script> tag into the document? Can I remove the <script> tags that are done?

    Read the article

  • How do I continuously update data on an ASP page?

    - by Lori
    Hi, I have an ASP page based on a very simple database. It references a single table of probably 30 records and maybe 12 data fields, and everything works great as I am only uploading a new database every week or so. I have a special circumstance where I would like to upload new data to the database and have it display automatically on the page every 20 to 30 seconds, without the user having to refresh their screen. I would expect up to 1000 concurrent users accessing the data. I have been manually uploading the database via FTP, which will obviously not work on this timeline and would also run the risk of error pages as the database is being replaced. So, can anyone point me in the right direction to set up this scenario? Other details that might be helpful: the database is an Access database (but I could change to another format if needed), running on a Windows platform hosted by an ISP, not my own server. Thanks in advance for any help on this! Lori

    Read the article

  • Referencing a control programmatically with an external source

    - by James
    Forgive the title, I have no idea what this is called. I have an MS Access database set up, with a Period field that holds the values 1, 2, 3, 4 or 5. I retrieve these values using a database connection, and I would like to reference a particular control based on which period was grabbed from the database. Here's example code, pseudo of course:

        TextBox(dr(3)).Text = dr(0)

    dr(3) contains the period, and dr(0) contains the content I would like to put into the text box. I have these text boxes on my form: TextBox1, TextBox2, TextBox3, TextBox4 and TextBox5. So if dr(3) contained 2, then I would want to reference TextBox2. I hope I've made myself clear; any help would be greatly appreciated, thanks. :)

    Read the article

  • How to link a paid app user account to the system?

    - by user164589
    Hi guys, I have an issue related to publishing a paid app to the Android Market. (My application is an internet-connection-based app.) If I put the app on the Android Market, can a user who bought the app pass it on to anyone? How secure is it (I mean, how safe is the .apk file)? Also, what is the payment tool of the Android Market? My main point is choosing the best way to link a paid user to our system. Actually, I don't know how to link a paid user account to my system (by email address or device unique ID? What is the better way?). Can you advise me on this part? I really appreciate the help. Thanks in advance.

    Read the article

  • XML to DOC to PDF

    - by Max
    What is the easiest (and fastest) way to perform this kind of transformation: "data in XML" to "some MS Word 2003 supported format" to PDF, using Java? My first guess was to fill a template with the XML data (using placeholders, for example), then save it and convert it to PDF. But I can't just put placeholders into DOC files, and I can't convert from some other Word formats to PDF... My primary task is to convert XML data to PDF while allowing users to change the PDF on demand. The best way to change the PDF on demand seems to be to give the user some kind of MS Word readable document, and then convert it back. There are three main problems with this task: 1) I can't use OpenOffice for conversion. 2) The system should be able to convert ~1 page of a table-based document per second on a 2 GHz core. 3) RTF does not provide enough styling, so some more complex format should be used. Thanks in advance.

    Read the article

  • Could not load file or assembly log4net or one of its dependencies

    - by Elie
    I've been asked to take a look at an error in an ASP/C# application with its PayPal integration. The error, shown in full, is:

        Could not load file or assembly 'log4net, Version=1.2.0.30714, Culture=neutral, PublicKeyToken=b32731d11ce58905' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)

    From what I understand, this means that the actual file located (that is, log4net.dll in my bin directory) does not match the version expected based on some assembly configuration. The problem I'm having is that I cannot locate where this file is being referenced. I have access to all the files in the web root directory of the site, and cannot locate any config files that reference this DLL. Where else might I need to look to determine what's causing the mismatch? As a note, I've made sure that the version of the DLL in the bin directory is up to date, but this does not seem to have resolved anything.
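    One way to track the stale reference down from code rather than config is to dump which log4net assembly the runtime actually resolves, and which of the site's loaded assemblies still reference log4net and at what version. This is only a diagnostic sketch of mine (drop it into a test page or console harness), not something from the question:

        using System;
        using System.Reflection;

        // Diagnostic sketch: report the log4net assembly that actually gets loaded,
        // then list every loaded assembly that references log4net and the version it
        // expects. The stale 1.2.0.30714 reference should show up next to whichever
        // assembly still carries it.
        public static class Log4NetProbe
        {
            public static void Dump()
            {
                Assembly loaded = Assembly.Load("log4net");
                Console.WriteLine("Resolved: " + loaded.FullName + " from " + loaded.Location);

                foreach (Assembly asm in AppDomain.CurrentDomain.GetAssemblies())
                {
                    foreach (AssemblyName reference in asm.GetReferencedAssemblies())
                    {
                        if (reference.Name == "log4net")
                            Console.WriteLine(asm.GetName().Name + " expects log4net " + reference.Version);
                    }
                }
            }
        }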

    Read the article

  • Windows Azure Service Bus Splitter and Aggregator

    - by Alan Smith
    This article will cover basic implementations of the Splitter and Aggregator patterns using the Windows Azure Service Bus. The content will be included in the next release of the “Windows Azure Service Bus Developer Guide”, along with some other patterns I am working on. I’ve taken the pattern descriptions from the book “Enterprise Integration Patterns” by Gregor Hohpe. I bought a copy of the book in 2004, and recently dusted it off when I started to look at implementing the patterns on the Windows Azure Service Bus. Gregor also presented a session in 2011, “Enterprise Integration Patterns: Past, Present and Future”, which is well worth a look. I’ll be covering more patterns in the coming weeks; I’m currently working on Wire-Tap and Scatter-Gather. There will no doubt be a section on implementing these patterns in my “SOA, Connectivity and Integration using the Windows Azure Service Bus” course.

    There are a number of scenarios where a message needs to be divided into a number of sub messages, and also where a number of sub messages need to be combined to form one message. The splitter and aggregator patterns provide a definition of how this can be achieved. This section will focus on the implementation of basic splitter and aggregator patterns using the Windows Azure Service Bus direct programming model. In BizTalk Server, receive pipelines are typically used to implement the splitter pattern, with sequential convoy orchestrations often used to aggregate messages. In the current release of the Service Bus, there is no functionality in the direct programming model that implements these patterns, so it is up to the developer to implement them in the applications that send and receive messages.

    Splitter

    A message splitter takes a message and splits it into a number of sub messages. As there are different scenarios for how a message can be split into sub messages, message splitters are implemented using different algorithms. The Enterprise Integration Patterns book describes the splitter pattern as follows: How can we process a message if it contains multiple elements, each of which may have to be processed in a different way? Use a Splitter to break out the composite message into a series of individual messages, each containing data related to one item. The Enterprise Integration Patterns website provides a description of the Splitter pattern here. In some scenarios a batch message could be split into the sub messages that are contained in the batch. The splitting of a message could be based on the message type of the sub message, or on the trading partner that the sub message is to be sent to.

    Aggregator

    An aggregator takes a stream of related messages and combines them together to form one message. The Enterprise Integration Patterns book describes the aggregator pattern as follows: How do we combine the results of individual, but related messages so that they can be processed as a whole? Use a stateful filter, an Aggregator, to collect and store individual messages until a complete set of related messages has been received. Then, the Aggregator publishes a single message distilled from the individual messages. The Enterprise Integration Patterns website provides a description of the Aggregator pattern here. A common example of the need for an aggregator is in scenarios where a stream of messages needs to be combined into a daily batch to be sent to a legacy line-of-business application.
The BizTalk Server EDI functionality provides support for batching messages in this way using a sequential convoy orchestration. Scenario The scenario for this implementation of the splitter and aggregator patterns is the sending and receiving of large messages using a Service Bus queue. In the current release, the Windows Azure Service Bus currently supports a maximum message size of 256 KB, with a maximum header size of 64 KB. This leaves a safe maximum body size of 192 KB. The BrokeredMessage class will support messages larger than 256 KB; in fact the Size property is of type long, implying that very large messages may be supported at some point in the future. The 256 KB size restriction is set in the service bus components that are deployed in the Windows Azure data centers. One of the ways of working around this size restriction is to split large messages into a sequence of smaller sub messages in the sending application, send them via a queue, and then reassemble them in the receiving application. This scenario will be used to demonstrate the pattern implementations. Implementation The splitter and aggregator will be used to provide functionality to send and receive large messages over the Windows Azure Service Bus. In order to make the implementations generic and reusable they will be implemented as a class library. The splitter will be implemented in the LargeMessageSender class and the aggregator in the LargeMessageReceiver class. A class diagram showing the two classes is shown below. Implementing the Splitter The splitter will take a large brokered message, and split the messages into a sequence of smaller sub-messages that can be transmitted over the service bus messaging entities. The LargeMessageSender class provides a Send method that takes a large brokered message as a parameter. The implementation of the class is shown below; console output has been added to provide details of the splitting operation. public class LargeMessageSender {     private static int SubMessageBodySize = 192 * 1024;     private QueueClient m_QueueClient;       public LargeMessageSender(QueueClient queueClient)     {         m_QueueClient = queueClient;     }       public void Send(BrokeredMessage message)     {         // Calculate the number of sub messages required.         long messageBodySize = message.Size;         int nrSubMessages = (int)(messageBodySize / SubMessageBodySize);         if (messageBodySize % SubMessageBodySize != 0)         {             nrSubMessages++;         }           // Create a unique session Id.         string sessionId = Guid.NewGuid().ToString();         Console.WriteLine("Message session Id: " + sessionId);         Console.Write("Sending {0} sub-messages", nrSubMessages);           Stream bodyStream = message.GetBody<Stream>();         for (int streamOffest = 0; streamOffest < messageBodySize;             streamOffest += SubMessageBodySize)         {                                     // Get the stream chunk from the large message             long arraySize = (messageBodySize - streamOffest) > SubMessageBodySize                 ? 
SubMessageBodySize : messageBodySize - streamOffest;             byte[] subMessageBytes = new byte[arraySize];             int result = bodyStream.Read(subMessageBytes, 0, (int)arraySize);             MemoryStream subMessageStream = new MemoryStream(subMessageBytes);               // Create a new message             BrokeredMessage subMessage = new BrokeredMessage(subMessageStream, true);             subMessage.SessionId = sessionId;               // Send the message             m_QueueClient.Send(subMessage);             Console.Write(".");         }         Console.WriteLine("Done!");     }} The LargeMessageSender class is initialized with a QueueClient that is created by the sending application. When the large message is sent, the number of sub messages is calculated based on the size of the body of the large message. A unique session Id is created to allow the sub messages to be sent as a message session, this session Id will be used for correlation in the aggregator. A for loop in then used to create the sequence of sub messages by creating chunks of data from the stream of the large message. The sub messages are then sent to the queue using the QueueClient. As sessions are used to correlate the messages, the queue used for message exchange must be created with the RequiresSession property set to true. Implementing the Aggregator The aggregator will receive the sub messages in the message session that was created by the splitter, and combine them to form a single, large message. The aggregator is implemented in the LargeMessageReceiver class, with a Receive method that returns a BrokeredMessage. The implementation of the class is shown below; console output has been added to provide details of the splitting operation.   public class LargeMessageReceiver {     private QueueClient m_QueueClient;       public LargeMessageReceiver(QueueClient queueClient)     {         m_QueueClient = queueClient;     }       public BrokeredMessage Receive()     {         // Create a memory stream to store the large message body.         MemoryStream largeMessageStream = new MemoryStream();           // Accept a message session from the queue.         MessageSession session = m_QueueClient.AcceptMessageSession();         Console.WriteLine("Message session Id: " + session.SessionId);         Console.Write("Receiving sub messages");           while (true)         {             // Receive a sub message             BrokeredMessage subMessage = session.Receive(TimeSpan.FromSeconds(5));               if (subMessage != null)             {                 // Copy the sub message body to the large message stream.                 Stream subMessageStream = subMessage.GetBody<Stream>();                 subMessageStream.CopyTo(largeMessageStream);                   // Mark the message as complete.                 subMessage.Complete();                 Console.Write(".");             }             else             {                 // The last message in the sequence is our completeness criteria.                 Console.WriteLine("Done!");                 break;             }         }                     // Create an aggregated message from the large message stream.         BrokeredMessage largeMessage = new BrokeredMessage(largeMessageStream, true);         return largeMessage;     } }   The LargeMessageReceiver initialized using a QueueClient that is created by the receiving application. The receive method creates a memory stream that will be used to aggregate the large message body. 
The AcceptMessageSession method on the QueueClient is then called, which will wait for the first message in a message session to become available on the queue. As the AcceptMessageSession can throw a timeout exception if no message is available on the queue after 60 seconds, a real-world implementation should handle this accordingly. Once the message session as accepted, the sub messages in the session are received, and their message body streams copied to the memory stream. Once all the messages have been received, the memory stream is used to create a large message, that is then returned to the receiving application. Testing the Implementation The splitter and aggregator are tested by creating a message sender and message receiver application. The payload for the large message will be one of the webcast video files from http://www.cloudcasts.net/, the file size is 9,697 KB, well over the 256 KB threshold imposed by the Service Bus. As the splitter and aggregator are implemented in a separate class library, the code used in the sender and receiver console is fairly basic. The implementation of the main method of the sending application is shown below.   static void Main(string[] args) {     // Create a token provider with the relevant credentials.     TokenProvider credentials =         TokenProvider.CreateSharedSecretTokenProvider         (AccountDetails.Name, AccountDetails.Key);       // Create a URI for the serivce bus.     Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri         ("sb", AccountDetails.Namespace, string.Empty);       // Create the MessagingFactory     MessagingFactory factory = MessagingFactory.Create(serviceBusUri, credentials);       // Use the MessagingFactory to create a queue client     QueueClient queueClient = factory.CreateQueueClient(AccountDetails.QueueName);       // Open the input file.     FileStream fileStream = new FileStream(AccountDetails.TestFile, FileMode.Open);       // Create a BrokeredMessage for the file.     BrokeredMessage largeMessage = new BrokeredMessage(fileStream, true);       Console.WriteLine("Sending: " + AccountDetails.TestFile);     Console.WriteLine("Message body size: " + largeMessage.Size);     Console.WriteLine();         // Send the message with a LargeMessageSender     LargeMessageSender sender = new LargeMessageSender(queueClient);     sender.Send(largeMessage);       // Close the messaging facory.     factory.Close();  } The implementation of the main method of the receiving application is shown below. static void Main(string[] args) {       // Create a token provider with the relevant credentials.     TokenProvider credentials =         TokenProvider.CreateSharedSecretTokenProvider         (AccountDetails.Name, AccountDetails.Key);       // Create a URI for the serivce bus.     Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri         ("sb", AccountDetails.Namespace, string.Empty);       // Create the MessagingFactory     MessagingFactory factory = MessagingFactory.Create(serviceBusUri, credentials);       // Use the MessagingFactory to create a queue client     QueueClient queueClient = factory.CreateQueueClient(AccountDetails.QueueName);       // Create a LargeMessageReceiver and receive the message.     
LargeMessageReceiver receiver = new LargeMessageReceiver(queueClient);     BrokeredMessage largeMessage = receiver.Receive();       Console.WriteLine("Received message");     Console.WriteLine("Message body size: " + largeMessage.Size);       string testFile = AccountDetails.TestFile.Replace(@"\In\", @"\Out\");     Console.WriteLine("Saving file: " + testFile);       // Save the message body as a file.     Stream largeMessageStream = largeMessage.GetBody<Stream>();     largeMessageStream.Seek(0, SeekOrigin.Begin);     FileStream fileOut = new FileStream(testFile, FileMode.Create);     largeMessageStream.CopyTo(fileOut);     fileOut.Close();       Console.WriteLine("Done!"); } In order to test the application, the sending application is executed, which will use the LargeMessageSender class to split the message and place it on the queue. The output of the sender console is shown below. The console shows that the body size of the large message was 9,929,365 bytes, and the message was sent as a sequence of 51 sub messages. When the receiving application is executed the results are shown below. The console application shows that the aggregator has received the 51 messages from the message sequence that was creating in the sending application. The messages have been aggregated to form a massage with a body of 9,929,365 bytes, which is the same as the original large message. The message body is then saved as a file. Improvements to the Implementation The splitter and aggregator patterns in this implementation were created in order to show the usage of the patterns in a demo, which they do quite well. When implementing these patterns in a real-world scenario there are a number of improvements that could be made to the design. Copying Message Header Properties When sending a large message using these classes, it would be great if the message header properties in the message that was received were copied from the message that was sent. The sending application may well add information to the message context that will be required in the receiving application. When the sub messages are created in the splitter, the header properties in the first message could be set to the values in the original large message. The aggregator could then used the values from this first sub message to set the properties in the message header of the large message during the aggregation process. Using Asynchronous Methods The current implementation uses the synchronous send and receive methods of the QueueClient class. It would be much more performant to use the asynchronous methods, however doing so may well affect the sequence in which the sub messages are enqueued, which would require the implementation of a resequencer in the aggregator to restore the correct message sequence. Handling Exceptions In order to keep the code readable no exception handling was added to the implementations. In a real-world scenario exceptions should be handled accordingly.
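    As a rough sketch of the first improvement described above (copying the message header properties), a small helper could copy the custom properties from the original large message onto the first sub message in the splitter, and from the first received sub message onto the aggregated message in the receiver. The helper below is my own illustration and is not part of the article's implementation:

        using System.Collections.Generic;
        using Microsoft.ServiceBus.Messaging;

        // Illustrative helper: copy the custom header properties from one
        // BrokeredMessage to another, so the sending application's message
        // context survives the split/aggregate round trip.
        public static class MessageHeaderCopier
        {
            public static void CopyProperties(BrokeredMessage source, BrokeredMessage target)
            {
                foreach (KeyValuePair<string, object> property in source.Properties)
                {
                    target.Properties[property.Key] = property.Value;
                }
            }
        }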

    Read the article

  • java-COM interop: Implement COM interface in Java

    - by mdma
    How can I implement a vtable COM interface in Java? In the old days, I'd use the Microsoft JVM, which had built-in Java-COM interop. What's the equivalent for a modern JRE? Answers to a similar SO question proposed JACOB. I've looked at JACOB, but that is based on IDispatch and is aimed at controlling Automation servers. The COM interfaces I need are custom vtable interfaces (extending IUnknown), e.g. IPersistStream, IOleWindow, IContextMenu etc. For my use case, I could implement all the COM specifics in JNI and have the JNI layer call corresponding interfaces in Java, but I'm hoping for a less painful solution. It's for an open source project, so open source alternatives are preferred.

    Read the article

  • Conditional Operator in SQL Where Clause

    - by Marc
    I'm wishing I could do something like the following in SQL Server 2005 (which I know isn't valid) for my WHERE clause. Sometimes @teamID (passed into a stored procedure) will be the value of an existing teamID; otherwise it will always be zero and I want all rows from the Team table. I researched using CASE, but the operator needs to come before or after the entire expression, which prevents me from having a different operator based on the value of @teamid. Any suggestions other than duplicating my SELECT statements?

        declare @teamid int
        set @teamid = 0

        Select Team.teamID
        From Team
        case @teamid
            when 0 then WHERE Team.teamID > 0
            else WHERE Team.teamID = @teamid
        end

    Read the article

  • Can't get FCKEditor to work in a virtual directory.

    - by AngryHacker
    I have a WebForm that contains the following definition for the FCKeditor:

        <FCKeditorV2:FCKeditor ID="txtBody" runat="server" BasePath="/fckeditor/"
            Height="480px" ToolbarSet="WebCal1">
        </FCKeditorV2:FCKeditor>

    This works fine in my VS2008-based web application. However, when I deploy it to a virtual directory in IIS, it looks for the FCKeditor files (e.g. javascript, stylesheets, etc.) in the /fckeditor folder, not in /MyVirtualDir/fckeditor. I've tried changing the BasePath to ~/fckeditor/, but then it won't work on my dev machine. What is the right way to go, so that the FCKeditor maps onto the right directory? In my project the fckeditor directory is right off the root.
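    One possible direction, sketched here under the assumption that the control's BasePath property can also be set from code-behind (as the markup above suggests): resolve the application-relative path at runtime, so the same project works both at the site root and under a virtual directory.

        // Sketch only: ResolveUrl turns "~/fckeditor/" into "/fckeditor/" at the
        // site root and into "/MyVirtualDir/fckeditor/" under the virtual directory.
        protected void Page_Load(object sender, EventArgs e)
        {
            txtBody.BasePath = ResolveUrl("~/fckeditor/");
        }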

    Read the article

  • How to make my iPhone app compatible with iOS 4?

    - by Davide
    Hello, my iPhone OS 3.1 based application is not working on the iOS 4 GM: the camera is not showing in full screen, it doesn't correctly detect compass information, the UIWebViews don't respond to touches (they don't scroll), and so on. It's completely broken! Now my question is: how can I develop an update using the latest Xcode with support for iOS 4? The latest iOS 4 Xcode (3.2.3) doesn't provide any way to develop for iPhone OS 3.x ("base SDK missing"). On the other side, Xcode 3.2.2 would not allow me to debug it on an iOS 4 device, so I can't test it.

    Read the article

  • Websockify to wrap a forking server

    - by Gurjeet Singh
    I came across Websockify [1] and the accompanying Websock client-side JavaScript library. As I understand it from the "Wrap a Program" section in the README, Websockify can help you launch a TCP server and rebind its port so that incoming WebSockets-based communication is parsed and forwarded to the server on the proper (rebound) port. My question is, can this mechanism be used to wrap a server that forks children which in turn communicate with the client on a different port? Specifically, I am interested in websockifying a Postgres server, which typically listens on port 5432 and, for a new incoming connection, forks a child which serves all future requests from that client. (If it helps, Oracle RDBMS and many other servers, RDBMS or not, also use a similar method.) [1] https://github.com/kanaka/websockify

    Read the article

  • Embedding swank-clojure in java program

    - by user237417
    Based on the Embedding section of http://github.com/technomancy/swank-clojure, I'm using the following to test it out. Is there a better way to do this that doesn't use Compiler? Is there a way to programmatically stop swank? It seems start-repl takes control of the thread. What would be a good way to spawn off another thread for it and be able to kill that thread programmatically?

        import clojure.lang.Compiler;
        import java.io.StringReader;

        public class Embed {
            public static void main(String[] args) throws Exception {
                final String startSwankScript =
                    "(ns my-app\n" +
                    " (:use [swank.swank :as swank]))\n" +
                    "(swank/start-repl) ";
                Compiler.load(new StringReader(startSwankScript));
            }
        }

    Any help much appreciated, hhh

    Read the article

  • Visual Studio Package for 2005/2008/2010 ??

    - by asp2go
    We are looking to turn an internal tool we have developed into a Visual Studio package that we would sell to other developers. The tool will impact the custom editor and/or custom languages. Visual Studio 2010 has heavily redesigned the APIs to simplify much of the work involved for these types of integration, but the key question we have is: what is the typical adoption pace of new Visual Studio versions? Is there any information out there on adoption rates based on history? How many shops are still using 2005? This will help us decide whether to target just 2010 using the new APIs, or whether to go back and support 2008 (maybe 2005) and test forward.

    Read the article

  • JIRA JQL searching by date - is there a way of getting Today() (Date) instead of Now() (DateTime)

    - by Shevek
    I am trying to create some issue filters in JIRA based on CreateDate. The only date/time function I can find is Now(), and searches relative to that, i.e. "-1d", "-4d" etc. The only problem with this is that Now() is time specific, so there is no way of getting a particular day's created issues. For example, Created < Now() AND Created >= "-1d" when run at 2pm today will show all issues created from 2pm yesterday to 2pm today; when run at 9am tomorrow it will show all issues created from 9am today to 9am tomorrow. What I want is to be able to search for all issues created from 00:00 to 23:59 on any day. Is this possible?

    Read the article

  • RSS feed for gas prices and how to interpret the feed

    - by subh
    I am trying to add an RSS feed of gas prices, based on location, to my application. I googled for an RSS feed of gas prices and bumped into Motortrend's gas price feed at http://www.motortrend.com/widgetrss/gas. The feed seems to be fine, but the price values seem to be depicted in letters, as below:

        Chevron
        3921 Irvine Blvd, Irvine, CA 92602 (0.0 miles)
        Monday, May 10, 2010 9:16 AM
        Regular: ZEIECHK
        Plus: ZEHGIHC
        Premium: ZEGJEGE
        Diesel: N/A

    How do I interpret these values to come up with a value for the gas price? Or is it internal to Motortrend and cannot be used elsewhere?

    Read the article

  • Looking into Enum Support in Entity Framework 5.0 Code First

    - by nikolaosk
    In this post I will show you, with a hands-on demo, the enum support that is available in Visual Studio 2012, .NET Framework 4.5 and Entity Framework 5.0. You can have a look at this post to learn about the support for multiple diagrams per model that exists in Entity Framework 5.0. We will demonstrate this with a step by step example. I will use Visual Studio 2012 Ultimate. You can also use Visual Studio 2012 Express Edition. Before I move on to the actual demo I must say that in EF 5.0 an enumeration can have the following underlying types:

        Byte
        Int16
        Int32
        Int64
        Sbyte

    Obviously I cannot go into much detail on what EF is and what it does. I will give again a short introduction. The .NET Framework provides support for Object Relational Mapping through EF. So EF is an ORM tool and it is now the main data access technology that Microsoft works on. I use it quite extensively in my projects. Through EF we have many things provided for us out of the box. We have the automatic generation of SQL code. It maps relational data to strongly typed objects. All the changes made to the objects in memory are persisted in a transactional way back to the data store.

    You can find in this post an example on how to use the Entity Framework to retrieve data from an SQL Server database using the "Database/Schema First" approach. In this approach we make all the changes at the database level and then we update the model with those changes. In this post you can see an example on how to use the "Model First" approach when working with ASP.Net and the Entity Framework. This model was first introduced in EF version 4.0, where we could start with a blank model and then create a database from that model. When we made changes to the model, we could recreate the database from the new model. You can search in my blog, because I have posted many posts regarding ASP.Net and EF. I assume you have a working knowledge of C# and know a few things about EF.

    The Code First approach is more code-centric than the other two. Basically we write POCO classes and then we persist them to a database using something called DbContext. Code First relies on DbContext. We create two or three classes (e.g. Person, Product) with properties, and then these classes interact with the DbContext class. We can create a new database based upon our POCO classes and have tables generated from those classes. We do not have an .edmx file in this approach. By using this approach we can write much easier unit tests. DbContext is a new context class and is a smaller, lightweight wrapper around the main context class, ObjectContext (used in Schema First and Model First).

    Let's begin building our sample application.

    1) Launch Visual Studio. Create an ASP.Net Empty Web application. Choose an appropriate name for your application.

    2) Add a web form, a default.aspx page, to the application.

    3) Now we need to make sure the Entity Framework is included in our project. Go to Solution Explorer and right-click on the project name. Then select Manage NuGet Packages... In the Manage NuGet Packages dialog, select the Online tab and choose the EntityFramework package. Finally click Install. Have a look at the picture below.

    4) Create a new folder. Name it CodeFirst.

    5) Add a new item to your application, a class file. Name it Footballer.cs. This is going to be a simple POCO class. Place it in the CodeFirst folder. 
The code follows public class Footballer { public int FootballerID { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public double Weight { get; set; } public double Height { get; set; } public DateTime JoinedTheClub { get; set; } public int Age { get; set; } public List<Training> Trainings { get; set; } public FootballPositions Positions { get; set; } }    Now I am going to define my enum values in the same class file, Footballer.cs    public enum FootballPositions    {        Defender,        Midfielder,        Striker    } 6) Now we need to create the Training class. Add a new class to your application and place it in the CodeFirst folder.The code for the class follows.     public class Training     {         public int TrainingID { get; set; }         public int TrainingDuration { get; set; }         public string TrainingLocation { get; set; }     }   7) Then we need to create a context class that inherits from DbContext.Add a new class to the CodeFirst folder.Name it FootballerDBContext.Now that we have the entity classes created, we must let the model know.I will have to use the DbSet<T> property.The code for this class follows       public class FootballerDBContext:DbContext     {         public DbSet<Footballer> Footballers { get; set; }         public DbSet<Training> Trainings { get; set; }     } Do not forget to add  (using System.Data.Entity;) in the beginning of the class file 8) We must take care of the connection string. It is very easy to create one in the web.config.It does not matter that we do not have a database yet.When we run the DbContext and query against it,it will use a connection string in the web.config and will create the database based on the classes. In my case the connection string inside the web.config, looks like this      <connectionStrings>    <add name="CodeFirstDBContext"  connectionString="server=.\SqlExpress;integrated security=true;"  providerName="System.Data.SqlClient"/>                       </connectionStrings>   9) Now it is time to create Linq to Entities queries to retrieve data from the database . Add a new class to your application in the CodeFirst folder.Name the file DALfootballer.cs We will create a simple public method to retrieve the footballers. The code for the class follows public class DALfootballer     {         FootballerDBContext ctx = new FootballerDBContext();         public List<Footballer> GetFootballers()         {             var query = from player in ctx.Footballers where player.FirstName=="Jamie" select player;             return query.ToList();         }     }   10) Place a GridView control on the Default.aspx page and leave the default name.Add an ObjectDataSource control on the Default.aspx page and leave the default name. Set the DatasourceID property of the GridView control to the ID of the ObjectDataSource control.(DataSourceID="ObjectDataSource1" ). Let's configure the ObjectDataSource control. Click on the smart tag item of the ObjectDataSource control and select Configure Data Source. In the Wizzard that pops up select the DALFootballer class and then in the next step choose the GetFootballers() method.Click Finish to complete the steps of the wizzard. Build your application.  11)  Let's create an Insert method in order to insert data into the tables. I will create an Insert() method and for simplicity reasons I will place it in the Default.aspx.cs file. 
private void Insert()        {            var footballers = new List<Footballer>            {                new Footballer {                                 FirstName = "Steven",LastName="Gerrard", Height=1.85, Weight=85,Age=32, JoinedTheClub=DateTime.Parse("12/12/1999"),Positions=FootballPositions.Midfielder,                Trainings = new List<Training>                             {                                     new Training {TrainingDuration = 3, TrainingLocation="MelWood"},                    new Training {TrainingDuration = 2, TrainingLocation="Anfield"},                    new Training {TrainingDuration = 2, TrainingLocation="MelWood"},                }                            },                            new Footballer {                                  FirstName = "Jamie",LastName="Garragher", Height=1.89, Weight=89,Age=34, JoinedTheClub=DateTime.Parse("12/02/2000"),Positions=FootballPositions.Defender,                Trainings = new List<Training>                                             {                                 new Training {TrainingDuration = 3, TrainingLocation="MelWood"},                new Training {TrainingDuration = 5, TrainingLocation="Anfield"},                new Training {TrainingDuration = 6, TrainingLocation="Anfield"},                }                           }                    };            footballers.ForEach(foot => ctx.Footballers.Add(foot));            ctx.SaveChanges();        }   12) In the Page_Load() event handling routine I called the Insert() method.        protected void Page_Load(object sender, EventArgs e)        {                   Insert();                }  13) Run your application and you will see that the following result,hopefully. You can see clearly that the data is returned along with the enum value.  14) You must have also a look at the database.Launch SSMS and see the database and its objects (data) created from EF Code First.Have a look at the picture below. Hopefully now you have seen the support that exists in EF 5.0 for enums.Hope it helps !!!
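    To round things off, here is a small sketch of my own (not part of the original walkthrough) showing the other half of the enum story: using the enum directly in a LINQ to Entities filter against the classes defined above.

        // Sketch: enum properties can be used directly in LINQ to Entities queries.
        // Uses the FootballerDBContext and FootballPositions types defined earlier.
        using (var ctx = new FootballerDBContext())
        {
            var defenders = (from f in ctx.Footballers
                             where f.Positions == FootballPositions.Defender
                             select f).ToList();

            foreach (var defender in defenders)
            {
                Console.WriteLine(defender.FirstName + " " + defender.LastName);
            }
        }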

    Read the article
