Search Results

Search found 20625 results on 825 pages for 'client'.

  • jCarousel - achieving an active state AND wrap:circular

    - by swisstony
    Hey folks. A while back I implemented the jCarousel image solution for a client that required a numbered active state. After a bit of googling I found the answer, but noticed that the preferred circular option would not work. What would happen is that once the carousel had cycled through all its (5) images, upon the return to the first the active state would be lost, because, according to jCarousel, it was actually the 6th (the index just keeps on incrementing). I went ahead and instead used wrap: 'both', which at least had a correctly functioning active state. However, now the client says they don't like this effect and simply want the animation to return to position 1 after the final image. This means I need to get wrap: 'both' working somehow. Below is my current code. Can someone please solve this one, as it's a little above my head!

        function highlight(carousel, objectli, liindex, listate) {
            jQuery('.jcarousel-control a:nth-child(' + liindex + ')').attr("class", "active");
        };

        function removehighlight(carousel, objectli, liindex, listate) {
            jQuery('.jcarousel-control a:nth-child(' + liindex + ')').removeAttr("class", "active");
        };

        jQuery('#mycarousel').jcarousel({
            initCallback: mycarousel_initCallback,
            auto: 5,
            wrap: 'both',
            vertical: true,
            scroll: 1,
            buttonNextHTML: null,
            buttonPrevHTML: null,
            animation: 1000,
            itemVisibleInCallback: highlight,
            itemVisibleOutCallback: removehighlight
        });
        });

    Thanks in advance
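
    One hedged approach (a sketch, not tested against this exact jCarousel version): keep the circular wrap and normalize the ever-incrementing index back into the 1..5 range with a modulo before using it in the nth-child selector. The item count of 5 is an assumption taken from the post.

        var TOTAL_ITEMS = 5; // assumption: the carousel always shows these 5 images

        function normalizeIndex(liindex) {
            // maps 6 -> 1, 7 -> 2, ... so the highlight survives the circular wrap
            return ((liindex - 1) % TOTAL_ITEMS) + 1;
        }

        function highlight(carousel, objectli, liindex, listate) {
            jQuery('.jcarousel-control a:nth-child(' + normalizeIndex(liindex) + ')').attr("class", "active");
        }

        function removehighlight(carousel, objectli, liindex, listate) {
            jQuery('.jcarousel-control a:nth-child(' + normalizeIndex(liindex) + ')').removeAttr("class");
        }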

  • Change Address/Port of WSDL EndPointAddress at runtime?

    - by Pretzel
    So I currently have 3 WSDLs added as Service References in my solution. They look like this in my app.config file (I removed the "bindings" section, because it's uninteresting):

        <system.serviceModel>
          <client>
            <endpoint address="http://localhost:8080/query-service/jse"
                      binding="basicHttpBinding" bindingConfiguration="QueryBinding"
                      contract="QueryService.Query" name="QueryPort" />
            <endpoint address="http://localhost:8080/platetype-service/jse"
                      binding="basicHttpBinding" bindingConfiguration="PlateTypeBinding"
                      contract="PlateTypeService.PlateType" name="PlateTypePort" />
            <endpoint address="http://localhost:8080/dataimport-service/jse"
                      binding="basicHttpBinding" bindingConfiguration="DataImportBinding"
                      contract="DataImportService.DataImport" name="DataImportPort" />
          </client>
        </system.serviceModel>

    When I utilize a WSDL, it looks something like this:

        using (DataService.DataClient dClient = new DataService.DataClient())
        {
            DataService.importTask impt = new DataService.importTask();
            impt.String_1 = "someData";
            DataService.importResponse imptr = dClient.importTask(impt);
        }

    In the "using" statement, when instantiating the DataClient object, I have 5 constructors available to me. In this scenario I use the default constructor, new DataService.DataClient(), which uses the built-in endpoint address string, which is fine and good. But I want the user of the application to have the option to change this value. 1) What's the best/easiest way of programmatically obtaining this string? 2) Then, once I've allowed the user to edit and test the value, where should I store it? I'd prefer having it stored in a place (like app.config or equivalent) so that there is no need to check whether the value exists and whether I should be using an alternate constructor. (Looking to keep my code tight, ya know?) Any ideas? Suggestions?
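
    A minimal sketch of one way to do this (the setting name and endpoint name below are assumptions, not from the post): ClientBase<T>-derived proxies expose an Endpoint property whose Address can be read and replaced at runtime, and the generated client has a constructor overload that takes the endpoint name from app.config plus a remote address. The current value can be read from dClient.Endpoint.Address.Uri, and a user-edited value could live in a user-scoped application setting (Settings.settings), which round-trips to user.config without any hand-rolled persistence.

        using System.ServiceModel;

        // hypothetical user-scoped setting holding the edited address
        string userUrl = Properties.Settings.Default.DataServiceUrl;

        using (var dClient = string.IsNullOrEmpty(userUrl)
            ? new DataService.DataClient()                                         // falls back to app.config
            : new DataService.DataClient("DataImportPort", new EndpointAddress(userUrl)))
        {
            // read back whatever address is actually in effect
            string effective = dClient.Endpoint.Address.Uri.ToString();
            // ... call the service as before ...
        }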

  • Using Kerberos authentication for SQL Server 2008

    - by vivek m
    I am trying to configure my SQL Server to use Kerberos authentication. My setup is like this: I have 2 virtual PCs on a Windows XP Pro SP3 host. Both VPCs are Windows Server 2003 R2. One VPC acts as the DC, DNS server and DHCP server, has Active Directory installed, and the SQL Server default instance is also running on this VPC. The second VPC is a domain member and acts as the SQL Server client machine. I configured the SPN on the SQL Server service account to get Kerberos working. On the client VPC it seems like it is using Kerberos authentication (as desired):

        C:\Documents and Settings\administrator.SHAREPOINTSVC>sqlcmd -S vm-winsrvr2003
        1> select auth_scheme from sys.dm_exec_connections where session_id=@@spid
        2> go
        auth_scheme
        ----------------------------------------
        KERBEROS

        (1 rows affected)
        1>

    but on the server computer (where the SQL Server instance is actually running) it looks like it is still using NTLM authentication. This is not a remote instance; the SQL Server is local to this machine.

        C:\Documents and Settings\Administrator>sqlcmd
        1> select auth_scheme from sys.dm_exec_connections where session_id=@@spid
        2> go
        auth_scheme
        ----------------------------------------
        NTLM

        (1 rows affected)
        1>

    What can I do so that it uses Kerberos on the server computer as well? (Or is this something that I should not expect?)
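
    A hedged thing to try (not a guaranteed fix): a bare `sqlcmd` on the server connects over the local Shared Memory provider, and local/loopback connections commonly negotiate NTLM. Forcing the TCP provider and connecting by the same name the SPN was registered for (e.g. MSSQLSvc/<fqdn>:1433) may let even the local connection use Kerberos; the host name below is an assumption.

        sqlcmd -S tcp:vm-winsrvr2003.yourdomain.local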

  • Limit TCP requests per IP

    - by asmo
    Hello! I'm wondering how to limit the TCP requests per client (per specific IP) in Java. For example, I would like to allow a maximum of X requests per Y seconds for each client IP. I thought of using a static Timer/TimerTask in combination with a HashSet of temporarily restricted IPs:

        private static final Set<InetAddress> restrictedIPs =
            Collections.synchronizedSet(new HashSet<InetAddress>());
        private static final Timer restrictTimer = new Timer();

    So when a user connects to the server, I add his IP to the restricted list and start a task to unrestrict him in X seconds:

        restrictedIPs.add(socket.getInetAddress());
        restrictTimer.schedule(new TimerTask() {
            public void run() {
                restrictedIPs.remove(socket.getInetAddress());
            }
        }, MIN_REQUEST_INTERVAL);

    My problem is that at the time the task runs, the socket object may be closed, and the remote IP address won't be accessible anymore... Any ideas welcomed! Also, if someone knows a Java-framework-built-in way to achieve this, I'd really like to hear it.
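
    A small sketch of one way around the closed-socket concern: capture the InetAddress in a final local variable at connection time and let the TimerTask close over that value instead of over the socket itself, so nothing in the scheduled task touches the (possibly closed) socket.

        final InetAddress remoteAddr = socket.getInetAddress(); // captured while the socket is known to be open
        restrictedIPs.add(remoteAddr);
        restrictTimer.schedule(new TimerTask() {
            @Override
            public void run() {
                restrictedIPs.remove(remoteAddr); // uses the captured address, not the socket
            }
        }, MIN_REQUEST_INTERVAL);

    For the more general "X requests per Y seconds" counting, a Map<InetAddress, Integer> (or a per-IP token bucket) guarded the same way would be the natural extension.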

  • Setting CPU target to x86 on .NET 2.0 project adds .NET 3.5 dependencies.

    - by AngryHacker
    I have a project in VS2008 that targets the .NET 2.0 framework. It was originally set to build for AnyCPU. I changed it to x86 and, for whatever reason, VS adds the following lines to the .csproj:

        <ItemGroup>
          <BootstrapperPackage Include="Microsoft.Net.Client.3.5">
            <Visible>False</Visible>
            <ProductName>.NET Framework Client Profile</ProductName>
            <Install>false</Install>
          </BootstrapperPackage>
          ...
          ...
          <BootstrapperPackage Include="Microsoft.Net.Framework.3.5.SP1">
            <Visible>False</Visible>
            <ProductName>.NET Framework 3.5 SP1</ProductName>
            <Install>false</Install>
          </BootstrapperPackage>
        </ItemGroup>

    Can someone explain why this is being added and whether I can safely remove it, as I still have to target the .NET 2.0 framework. Thanks.

  • How To Deal With Exceptions In Large Code Bases

    - by peter
    Hi all, I have a large C# code base. It seems quite buggy at times, and I was wondering if there is a quick way to improve finding and diagnosing issues that are occurring on client PCs. The most pressing issue is that exceptions occur in the software, are caught, and are even reported through to me. The problem is that by the time they are caught, the original cause of the exception is lost. I.e. if an exception was caught in a specific method, but that method calls 20 other methods, and those methods each call 20 other methods... you get the picture: a null reference exception is impossible to figure out, especially if it occurred on a client machine. I have currently found some places where I think errors are more likely to occur and wrapped these directly in their own try/catch blocks. Is that the only solution? I could be here a long time. I don't care that the exception will bring down the current process (it is just a thread anyway - not the main application), but I care that the exceptions come back and don't help with troubleshooting. Any ideas? I am well aware that I am probably asking a question which sounds silly and may not have a straightforward answer. All the same, some discussion would be good.
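
    A hedged sketch of the usual first step ('logger' and DoWork() below are placeholders for whatever the code base already has): log ex.ToString() at the outermost catch rather than just ex.Message. The resulting text includes the exception type, the message, the stack trace back to the original throw site, and the whole InnerException chain, which is normally enough to locate the failing method even on a client machine.

        try
        {
            DoWork(); // stands in for the real top-level call
        }
        catch (Exception ex)
        {
            logger.Error(ex.ToString()); // full type, message, stack trace and inner exceptions
            throw;                       // "throw;" (not "throw ex;") keeps the original stack trace
        }

    For exceptions on background threads, hooking AppDomain.CurrentDomain.UnhandledException and logging the same way catches the ones no try/catch ever sees.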

  • Who's setting TCP window size down to 0, Indy or Windows?

    - by François
    We have an application server which has been observed sending headers with TCP window size 0 at times when the network had congestion (at a client's site). We would like to know if it is Indy or the underlying Windows layer that is responsible for adjusting the TCP window size down from the nominal 64K in adaptation to the available throughput. And we would like to be able to act upon it becoming 0 (nothing gets sent, users wait = no good). So, any info, link, or pointer to Indy code is welcome... Disclaimer: I'm not a network specialist. Please keep the answer understandable for the average me ;-) Note: it's Indy9/D2007 on Windows Server 2003 SP2. More gory details: the TCP zero window cases happen on the middle tier talking to the DB server. It happens at the same moments when end users complain of slowdowns in the client application (that's what triggered the network investigation). Two major network issues causing bottlenecks have been identified. The TCP zero window happened when there was network congestion, but may or may not be caused by it. We want to know when that happens and have a way to do something (logging at least) in our code. So where to hook (in Indy?) to know when that condition occurs?

  • Why do I get this error in CXF?

    - by Milan
    I want to make a dynamic web service invoker in JSF with CXF. But when I load this simple code I get an error. The code:

        JaxWsDynamicClientFactory dcf = JaxWsDynamicClientFactory.newInstance();
        Client client = dcf.createClient("http://ws.strikeiron.com/IPLookup2?wsdl");

    The error:

        No Factories configured for this Application. This happens if the faces-initialization
        does not work at all - make sure that you properly include all configuration settings
        necessary for a basic faces application and that all the necessary libs are included.
        Also check the logging output of your web application and your container for any
        exceptions! If you did that and find nothing, the mistake might be due to the fact
        that you use some special web-containers which do not support registering
        context-listeners via TLD files and a context listener is not setup in your web.xml.
        A typical config looks like this; org.apache.myfaces.webapp.StartupServletContextListener
        Caused by: java.lang.IllegalStateException - No Factories configured for this
        Application. (... the same message repeated ...)

    Any idea how to solve the problem?
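
    For what it's worth, the exception is thrown by MyFaces before CXF ever runs: the error text itself points at a missing context listener (the XML around the listener class name appears to have been stripped when the message was pasted). A hedged reconstruction of the web.xml entry it refers to:

        <listener>
            <listener-class>org.apache.myfaces.webapp.StartupServletContextListener</listener-class>
        </listener>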

  • Oracle manually add an FK constraint

    - by Oxymoron
    Alright, since a client wants to automate a certain process, which includes creating a new key structure in a LIVE database, I need to create relations between tables/columns. Now I've found the tables ALL_CONS_COLS and USER_CONSTRAINTS, which hold information about constraints. If I were to manually create constraints by inserting into these tables, I should be able to recreate the original constraints. My question: are there any more tables I should look into? Do you have any alternate suggestions, as this sounds VERY dirty and error-prone to begin with. Current modus operandi:

        1. Create a new column in each table for the PK;
        2. Generate a guid for this PK;
        3. Create a new column in each table for the FKs;
        4. Fetch the guid associated with the FK;
        ....... done so far ......
        5. Add new constraint based on the old one;
        6. Remove old constraint;
        7. Rename new columns.

    This is kind of dodgy and I'd rather change my method; any ideas would be helpful. To put it differently: the client wants to change the key structure from int to guid on a live database. What's the best way to approach this?
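
    A hedged note with a sketch: the constraint views mentioned above are data-dictionary views, so constraints are not created by inserting rows into them but by querying them to reconstruct the definitions and then issuing the corresponding DDL. All names below are placeholders for whatever the generated script finds.

        -- recreate a discovered relationship on the new guid columns
        ALTER TABLE child_table
          ADD CONSTRAINT fk_child_parent
          FOREIGN KEY (parent_guid)
          REFERENCES parent_table (guid_pk);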

  • How do I handle the messages for a simple web-based live chat, on the server side?

    - by Carson Myers
    I'm building a simple live chat into a web application running on Django, but one thing I'm confused about is how I should store the messages between users. The chat will support multiple users, and a chat "session" is composed of users connected to one user that is the "host." The application is a sort of online document collaboration thing, so user X has a document, and users Y and Z would connect to user X to talk about the document, and that would be one chat session. If user Y disconnected for five minutes and then signed back in and reconnected to user X, he should not get any of the messages shared between users X and Z while he was away. If users X, Y, and Z can have a chat session about user X's document, then users X and Y can connect to a simultaneous but separate discussion about user Z's document. How should I handle this? Should I keep each message in the database? Each message would have an owner user and a target user (the host), and a separate table would be used to connect users with messages (which messages are visible to what users). Or should I store each session as an HTML file on the server, which messages get appended to? The problem is, I can't just send messages directly between clients. They have to be sent to the server in a POST request, and then each client has to periodically check for the messages in a GET request. Except I can't just have each message cleared after a client fetches it, because there could be multiple clients. How should I set this up? Any suggestions?
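
    A minimal Django-flavoured sketch of the database approach (model and field names are assumptions): store messages against a session row, never delete them on delivery, and have each client poll with the id of the last message it has already seen. Visibility and the "missed while away" rule then fall out of which id the client asks from.

        from django.db import models
        from django.contrib.auth.models import User

        class ChatSession(models.Model):
            host = models.ForeignKey(User, related_name='hosted_sessions')
            # document = models.ForeignKey(Document)  # hypothetical link to the shared document

        class Message(models.Model):
            session = models.ForeignKey(ChatSession, related_name='messages')
            sender = models.ForeignKey(User)
            body = models.TextField()
            created = models.DateTimeField(auto_now_add=True)

        def fetch_new(session, last_seen_id):
            # called from the periodic GET; the client remembers the highest id it received
            return session.messages.filter(id__gt=last_seen_id).order_by('id')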

  • WebClient download problem

    - by user472018
    Hello all, if this problem was discussed before, sorry for asking again. I want to download an image from a URL using the System.Net.WebClient class. When I try to download some images (e.g. the Google logo) no errors occur, but other images do produce errors. I don't understand why these errors happen. How can I fix this problem? My code is:

        WebClient client = new WebClient();
        try
        {
            // Downloads the file from the given url to the given destination
            client.DownloadFile(urltxt.Text, filetxt.Text);
            return true;
        }
        catch (WebException w)
        {
            MessageBox.Show(w.ToString());
            return false;
        }
        catch (System.Security.SecurityException)
        {
            MessageBox.Show("securityexeption");
            return false;
        }
        catch (Exception)
        {
            MessageBox.Show("exception");
            return false;
        }

    The errors are:

        System.Net.WebException: The underlying connection was closed: An unexpected error
        occurred on a receive. ---> System.IO.IOException: Unable to read data from the
        transport connection: An existing connection was forcibly closed by the remote host.
        ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed
        by the remote host...

    Thanks for your help.
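
    A hedged guess with a sketch: "connection forcibly closed by the remote host" on some hosts but not others is often the server rejecting requests that carry no User-Agent header (WebClient sends none by default). Adding one is cheap to try; the browser string below is arbitrary.

        WebClient client = new WebClient();
        // some image hosts drop connections from clients without a User-Agent
        client.Headers.Add(HttpRequestHeader.UserAgent,
            "Mozilla/5.0 (Windows NT 6.1; Trident/5.0)");
        client.DownloadFile(urltxt.Text, filetxt.Text);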

  • Entity Framework and WCF

    - by Nihilist
    Hi, I am a little confused about designing WCF services with EF. When using WCF and EF, where do we draw the line on what properties to return and what not to with the entity? Here is my scenario: I have a User. Here are the relations:

        User [1 to many] Address
        User [1 to many] Email
        User [1 to many] Phone

    So now, on the webform, on page1 I can edit user information: say I can edit a few properties on the User entity and can also edit the Address, Phone and Email entities (like add / delete and update any). On page2, I can only update User properties and nothing related to the navigation properties (Address, Email, Phone). So when I return the User entity (or DTO), should I be returning the navigation properties too? Or should the client make multiple calls to get the navigation properties? Also, how does it go with Save? Should the client make multiple calls to save the user and related entities, or just one call to save the graph? Let's say I just have a Save(User user) (where user has all the related entities too); both page1 and page2 will call Save and pass me the user, but on page1 I will need a lot more information, while on page2 I just need the user's primitive properties. So my question is: where do we draw this line, and how do we design these services? Is the WCF operation designed around the page and the fields it has? I am hoping I explained my problem well enough.
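
    One common answer, sketched with hypothetical DTO and operation names: shape the contracts around use cases rather than pages or the raw EF graph, so the "page2"-style operations exchange a slim summary type and the full-edit operations exchange a detailed type that carries the child collections, each saved by its own operation.

        using System.Collections.Generic;
        using System.Runtime.Serialization;

        [DataContract] public class AddressDto { [DataMember] public string Line1 { get; set; } }
        [DataContract] public class EmailDto   { [DataMember] public string Address { get; set; } }
        [DataContract] public class PhoneDto   { [DataMember] public string Number { get; set; } }

        [DataContract]
        public class UserSummaryDto                    // enough for page2-style edits
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public string Name { get; set; }
        }

        [DataContract]
        public class UserDetailsDto : UserSummaryDto   // for page1-style edits that touch the children
        {
            [DataMember] public List<AddressDto> Addresses { get; set; }
            [DataMember] public List<EmailDto> Emails { get; set; }
            [DataMember] public List<PhoneDto> Phones { get; set; }
        }

        // service contract sketch:
        //   UserSummaryDto GetUserSummary(int id);   void SaveUserSummary(UserSummaryDto user);
        //   UserDetailsDto GetUserDetails(int id);   void SaveUserDetails(UserDetailsDto user);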

  • How would you start automating my job?

    - by Jurily
    At my new job, we sell imported stuff. In order to be able to sell said stuff, currently the following things need to happen for every incoming shipment:

        - Invoice arrives, in the form of an email attachment, Excel spreadsheet
        - Monkey opens invoice, copy-pastes the relevant part of three columns into the relevant parts of a spreadsheet template, where extremely complex calculations happen, like =B2*550
        - Monkey sends this new spreadsheet to boss (email if lucky, printer otherwise), who sets the retail price
        - Monkey opens the reply, then proceeds to input the data into the production database using a client program that is unusable on so many levels it's not even worth detailing
        - Monkey fires up HyperTerminal, types in "AT", disconnect
        - Monkey sends text messages and emails to customers using another part of the horrible client program, one at a time

    I want to change Monkey from myself to software wherever possible. I've never written anything that interfaces with email, Excel, databases or SMS before, but I'd be more than happy to learn if it saves me from this. Here's my uneducated wishlist:

        - Monkey asks Thunderbird (mail server perhaps?) for the attachment
        - Monkey tells Excel to dump the spreadsheet into a more Jurily-friendly format, like CSV or something
        - Monkey parses the output, does the complex calculations
        - // TODO: find a way to get the boss-generated prices with minimal manual labor involved
        - Monkey connects to the database, inserts data
        - Monkey spams customers

    Is all this feasible? If yes, where do I start reading? How would you improve it? What language/framework do you think would be ideal for this? What would you do about the boss?

  • How do I produce a screenshot of a flash swf on the server?

    - by indignant
    I'm writing a flash app using the open source tools. I would like to load a data file into the app and capture a screenshot of the stage on the server. The only part that seems mysterious is running the app on the server. In fact, I don't even care if it's the same app running on the server and in the browser - if I can use the flash stage and drawing routines to produce an image server-side, I'm happy. If I have to delve into Flex, fine. Right now I'm having problems finding any starting point at all. I gather Adobe has some commercial products that may fit the bill, but I'd like to stick with open source, Apache, and Linux. I know this is probably possible with haXe/Neko, but I'd like to use more mainstream tools if possible. Am I asking too much? EDIT/CLARIFICATION: Many thanks for the responses so far, but I think I've been a bit muddy in my description. I've already written the actual stage-grabbing stuff using the same PNGEncoder class as was suggested. The problem is in actually running the swf on the server side. I don't want to let the client take the screenshot itself, because this opens up the possibility of the client maliciously submitting a screenshot which does not correspond to what is on the stage, that is, I don't want users uploading porn. If I could run the actionscript code on the server, then I could generate the screenshot from my data files and be sure that the screenshot matches the data, but I have no idea how to run the actionscript or swf on the server.

  • jQuery: how to store record ids for data from an ajax query in dynamically created HTML elements

    - by grapkulec
    The question title is probably rather cryptic, but I will try to explain myself here, so please bear with me :) Let's assume this configuration:

        - server side is a PHP application responding to requests with data (list of items, single item details, etc.) in JSON format
        - client side is a jQuery application sending ajax requests to that PHP app and creating HTML content corresponding to the received data

    So, for example: the client requests "a list of all animals with names starting with 'A'", gets the JSON response from the server, and for every "animal" creates some HTML gizmo, like a div with the animal description or something like that. It doesn't really matter what HTML element it will be, but it has to point exactly to a specific record by "containing" the id of that record. And here is my dilemma: is it a good solution to use the "id" property for that? So it would be like:

        <div id="10" class="animal">
            <p> This is animal of very mysterious kind... </p>
        </div>
        <div id="11" class="animal">
            <p> And this one is very common to our country... </p>
        </div>

    where id="10" is of course an indication that this is the representation of the record with id = 10. Or maybe I should store this record id in some custom-made tag like <record_id>10</record_id> and leave "id" strictly for what it was meant to be (a CSS selector)? I need the record id for further stuff like updating the database with some user input, deleting some of the "animals" or creating new ones, or anything else that will be needed. All manipulations will be done with jQuery, and ajax requests and responses will be visualized also with dynamic creation of an HTML interface. I'm sure that somebody has had to deal with this kind of stuff before, so I would be grateful for some tips on the topic.
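
    A hedged sketch of the usual pattern: keep "id" for styling and uniqueness (an HTML4 id must not start with a digit anyway) and carry the record id in a data-* attribute, which jQuery can read back when the element is clicked or submitted. Attribute and class names below are illustrative.

        <div class="animal" data-record-id="10">
            <p>This is an animal of a very mysterious kind...</p>
        </div>

        // reading it back inside a click handler
        jQuery('.animal').click(function () {
            var recordId = jQuery(this).attr('data-record-id');
            // send recordId along with the update/delete ajax call
        });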

  • How to reliably identify users across the Internet?

    - by amn
    I know this is a big one. In fact, it may be used for some SO community wiki. Anyway, I am running a website that DOES NOT use explicit authentication of users. It's public, as in open to everybody. However, due to the nature of the service, some users need to be locked out due to misbehavior. I am currently blocking IP addresses, but I am aware of the supposed fact that many people purposefully reset their DHCP client cache to have their ISP assign them new addresses. Is that a fact? I think it certainly is a lucrative possibility for some people who want to circumvent being denied access. So IPs turn out to be a suboptimal way of dealing with this. But there is nothing else, is there? MAC addresses don't survive on the WAN (they change from hop to hop?), and even if they did, these can also be spoofed, although I think less easily than IP renewal. Cookies and even Flash cookies are out of the question, because there are tons of "tutorials" on how to wipe these, and those intent on wreaking havoc on the Internet are well aware of and well equipped against such rudimentary measures as I would employ. Is there anything else to lean on? I was thinking of heuristic profiling - collecting available data from the client side and forming some key with it - but have not gone as far as implementing it. Is it an option?

  • Callback function and function pointer trouble in C++ for a BST

    - by Brendon C.
    I have to create a binary search tree which is templated and can deal with any data types, including abstract data types like objects. Since it is unknown what types of data an object might have and which data is going to be compared, the client side must create a comparison function and also a print function (because it is also not known which data has to be printed). I have edited some C code which I was directed to and tried to template it, but I cannot figure out how to configure the client display function. I suspect a variable 'tree_node' of class BinarySearchTree has to be passed in, but I am not sure how to do this. For this program I'm creating an integer binary search tree and reading data from a file. Any help on the code or the problem would be greatly appreciated :)

    Main.cpp

        #include "BinarySearchTreeTemplated.h"
        #include <iostream>
        #include <fstream>
        #include <string>
        using namespace std;

        /* Comparison function */
        int cmp_fn(void *data1, void *data2)
        {
            if (*((int*)data1) > *((int*)data2))
                return 1;
            else if (*((int*)data1) < *((int*)data2))
                return -1;
            else
                return 0;
        }

        static void displayNode() //<--------NEED HELP HERE
        {
            if (node)
                cout << " " << *((int)node->data)
        }

        int main()
        {
            ifstream infile("rinput.txt");
            BinarySearchTree<int> tree;
            while (true)
            {
                int tmp1;
                infile >> tmp1;
                if (infile.eof())
                    break;
                tree.insertRoot(tmp1);
            }
            return 0;
        }

    BinarySearchTree.h (a bit too big to format here): http://pastebin.com/4kSVrPhm
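
    Without seeing the header this is only a guess, but if the templated tree follows the C version and hands the callback the node's data as a void*, a display function could look like the sketch below (the parameter type and the traversal call are assumptions to check against BinarySearchTree.h). Note the cast has to be to int* before dereferencing; *((int)node->data) in the snippet above will not compile.

        // sketch: display callback for a tree that passes the stored data as void*
        static void displayNode(void *data)
        {
            if (data)
                std::cout << " " << *((int *)data);
        }

        // hypothetical registration/traversal call - the real name is in the header
        // tree.traverse(displayNode);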

  • WSAECONNREFUSED when connecting to server

    - by Robert Mason
    I'm currently working on a server. I know that the client side is working (I can connect to www.google.com on port 80, for example), but the server is not functioning correctly. The socket has socket()ed, bind()ed, and listen()ed successfully and is in an accept loop. The only problem is that accept() doesn't seem to work. netstat shows that the server connection is running fine, as it prints the PID of the server process as LISTENING on the correct port. However, accept never returns. Accept just keeps running, and running, and if I try to connect to the port on localhost, I get a 10061 WSAECONNREFUSED. I tried looping the connection, and it just keeps refusing connections until I hit Ctrl+C. I put a breakpoint directly after the call to accept(), and no matter how many times I try to connect to that port, the breakpoint never fires. Why is accept not accepting connections? Has anyone else had this problem before? Known:

        [breakpoint0]
        if ((new_fd = accept(sockint, NULL, NULL)) == -1)
        {
            throw netlib::error("Accept Error"); // netlib::error : public std::exception
        }
        else
        {
            [breakpoint1]
            code...;
        }

    breakpoint0 is reached (and then continued through), no exception is thrown, and breakpoint1 is never reached. The client code is proven to work. netstat shows that the socket is listening. If it means anything, I'm connecting to 127.0.0.1 on port 5842 (a random number). The server is configured to run on 5842, and netstat confirms that the port is correct.

  • Convert JRuby 1.8 string to Windows encoding?

    - by Arne
    Hey, I want to export some data from my JRuby on Rails webapp to Excel, so I create a CSV string and send it as a download to the client using:

        send_data(text, :filename => "file.csv", :type => "text/csv; charset=CP1252", :encoding => "CP1252")

    The file seems to be in UTF-8, which Excel cannot read correctly. I googled the problem and found that Iconv can convert encodings. I try to do that with:

        ic = Iconv.new('CP1252', 'UTF-8')
        text = ic.iconv(text)

    but when I send the converted text it does not make any difference: it is still UTF-8 and Excel cannot read the special characters. There are several solutions using Iconv, so this seems to work for others. When I convert the file on the Linux shell manually with iconv, it works. What am I doing wrong? Is there a better way? I'm using:

        - jruby 1.3.1 (ruby 1.8.6p287) (2009-06-15 2fd6c3d) (Java HotSpot(TM) Client VM 1.6.0_19) [i386-java]
        - Debian Lenny
        - Glassfish app server
        - Iceweasel 3.0.6

  • Sending a JSON object to an ASP.NET web service using the jQuery ajax function

    - by uzay95
    I want to create objects on the client side of an aspx page, and I want to add functions to these javascript classes to make life easier. I can already get and use the objects (derived from the server-side classes) which are returned from the services, but when I want to send objects from the client with the jQuery ajax methods, I can't manage it :) These are my javascript objects:

        function ClassAndMark(_mark, _lesson){
            this.Lesson = _lesson;
            this.Mark = _mark;
        }

        function Student(_name, _surname, _classAndMark){
            this.Name = _name;
            this.SurName = _surname;
            this.ClassAndMark = _classAndMark;
        }

        JSClass.prototype.fSaveToDB(){
            $.ajax({
                type: "POST",
                contentType: "application/json; charset=utf-8",
                url: "/WS/SaveObject.asmx/fSaveToDB"),
                data: ????????????,
                dataType: "json"
            });
        }

    Actually I don't know what the definition of the classes and methods on the server side should be, but I think:

        class ClassAndMark{
            public string Lesson ;
            public string Mark ;
        }

        class Student{
            public string Name ;
            public string SurName ;
            public ClassAndMark ClassAndMark ;
        }

    The web service is below, but again I couldn't work out what should go in place of the ????:

        [WebMethod()]
        public void fSaveToDB(???? _obj)
        {
            // How can I convert the input parameter/parameters
            // of the method into the server-side object?
        }
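
    A hedged sketch of the usual pattern for ScriptService-enabled .asmx endpoints (assuming the web service class is marked [ScriptService] and the property names line up with the .NET classes): the WebMethod takes the typed Student parameter, and the client sends a JSON string whose outer property name matches that parameter name, e.g. via JSON.stringify (json2.js on older browsers).

        // client side
        var student = new Student("John", "Doe", new ClassAndMark("A", "Math"));
        $.ajax({
            type: "POST",
            contentType: "application/json; charset=utf-8",
            url: "/WS/SaveObject.asmx/fSaveToDB",
            data: JSON.stringify({ _obj: student }),   // "_obj" must match the parameter name
            dataType: "json"
        });

        // server side (C#)
        // [WebMethod]
        // public void fSaveToDB(Student _obj) { /* _obj arrives already deserialized */ }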

  • How do I serialize/deserialize an NHibernate entity that has references to other objects?

    - by Daniel T.
    I have two NHibernate-managed entities that have a bi-directional one-to-many relationship:

        public class Storage
        {
            public virtual string Name { get; set; }
            public virtual IList<Box> Boxes { get; set; }
        }

        public class Box
        {
            public virtual string Box { get; set; }
            [DoNotSerialize] public virtual Storage ParentStorage { get; set; }
        }

    A Storage can contain many Boxes, and a Box always belongs in a Storage. I want to edit a Box's name, so I send it to the client using JSON. Note that I don't serialize ParentStorage because I'm not changing which storage it's in. The client edits the name and sends the Box back as JSON. The server deserializes it back into a Box entity. The problem is, the ParentStorage property is null. When I try to save the Box to the database, it updates the name, but also removes the relationship to the Storage. How do I properly serialize and deserialize an entity like a Box, while keeping the JSON data size to a minimum?
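
    A sketch of one common way around this (assuming Box has an identifier property; the name Id is an assumption): treat the deserialized object as a DTO, load the persistent Box inside the session, and copy over only the edited fields, so the ParentStorage reference stored in the database is never overwritten with null.

        public void UpdateBoxName(ISession session, Box edited)
        {
            using (var tx = session.BeginTransaction())
            {
                var box = session.Get<Box>(edited.Id);   // persistent instance, ParentStorage intact
                box.Box = edited.Box;                    // copy just the edited name property
                tx.Commit();                             // flushes the change; no full-entity overwrite
            }
        }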

  • Patterns / Solutions to complicated Feature Management

    - by yclian
    Hi all, my company develops a CDN / web-hosting solution. We have a middleware that serves as a business logic layer and exposes a web service for the front-end. I would like to find a clean solution to feature management - there are uncertainties and ugly workarounds/solutions in the software about which the devs would say "when it happens or is broken, we will fix it". For example, here are the features that a web publisher can have:

        - Sites limit
        - Bandwidth limit
        - SSL feature + SSL configuration per site

    If we downgrade a web publisher from 10 sites down to 5 sites, we can choose not to suspend the other 5 sites, or we could prompt for suspension before the downgrade. For the case of the bandwidth limit, the downgrade is easy: when the bandwidth check happens, if the publisher has exceeded it, then we suspend his account. For the case of the SSL feature, every SSL configuration is tied to a site; what should happen to these configuration objects when the SSL feature is downgraded from enabled to disabled? So as you can see, there are many different situations and there are different ways of handling them. I could make a system that examines the impacts and prompts the user to make changes before the downgrade/upgrade. Or a system that ignores the impacts and just upgrades/downgrades. Bad. Or a system designed in a way that the client code needs to be aware of the complex feature matrix (or I can expose a helper to the client code to check if a feature is not DEFUNCT). There may be other ways; I am still thinking, but puzzled. I am wondering how you would tackle this issue, and whether there are any recommended patterns, books or software that you think I can refer to? Appreciate your help.

  • WCF Multiple Service Configuration Issue

    - by Goober
    Scenario (ignoring the fact that some of the settings might be wrong or inconsistent): why does the program fail when I try to put these 2 separate configurations for WCF services into the same APP.CONFIG file? One was written by myself and another by a friend, yet I cannot get the application to run. What have I missed?

    ERROR: TypeInitializationException

        <configuration>
          <system.serviceModel>
            <!--START Service 1 CONFIGURATION-->
            <bindings>
              <netTcpBinding>
                <binding name="tcpServiceEndPoint" closeTimeout="00:01:00" openTimeout="00:01:00"
                         receiveTimeout="00:10:00" sendTimeout="00:01:00" transactionFlow="false"
                         transferMode="Buffered" transactionProtocol="OleTransactions"
                         hostNameComparisonMode="StrongWildcard" listenBacklog="10"
                         maxBufferPoolSize="524288" maxBufferSize="65536" maxConnections="10"
                         maxReceivedMessageSize="65536">
                  <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                                maxBytesPerRead="4096" maxNameTableCharCount="16384" />
                  <reliableSession ordered="true" inactivityTimeout="00:05:00" enabled="true" />
                  <security mode="None">
                    <transport clientCredentialType="Windows" protectionLevel="EncryptAndSign" />
                    <message clientCredentialType="Windows" />
                  </security>
                </binding>
              </netTcpBinding>
            </bindings>
            <client>
              <endpoint address="" binding="netTcpBinding" bindingConfiguration="tcpServiceEndPoint"
                        contract="ListenerService.IListenerService" name="tcpServiceEndPoint" />
            </client>
            <!--END Service 1 CONFIGURATION-->
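
    A hedged observation with a skeleton (not a diagnosis of this exact config, whose second half isn't shown): a .config file may contain only one <system.serviceModel> section, so pasting the second service's configuration as a second <system.serviceModel> (or a second <configuration> root) makes the configuration system throw, which often surfaces wrapped in exactly this kind of initialization exception when the proxy types are first touched. Merging both services under one section, with unique binding and endpoint names, is the usual shape:

        <configuration>
          <system.serviceModel>
            <bindings>
              <netTcpBinding>
                <binding name="tcpServiceEndPoint" ... />
                <!-- bindings from the second service go here, with different names -->
              </netTcpBinding>
            </bindings>
            <client>
              <endpoint name="tcpServiceEndPoint" binding="netTcpBinding"
                        bindingConfiguration="tcpServiceEndPoint"
                        contract="ListenerService.IListenerService" address="" />
              <!-- endpoints from the second service go here, with different names -->
            </client>
          </system.serviceModel>
        </configuration>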

  • How can I eliminate latency in QuickTime streamed video

    - by JJFeiler
    I'm prototyping a client that displays streaming video from a HaiVision Barracuda through a QuickTime client. I've been unable to reduce the buffer size below 3.0 seconds... For this application we need as low a latency as the network allows, and we prefer video dropouts to delay. I'm doing the following:

        - (void)applicationDidFinishLaunching:(NSNotification *)aNotification
        {
            NSString *path = [[NSBundle mainBundle] pathForResource:@"haivision" ofType:@"sdp"];
            NSError *error = nil;
            QTMovie *qtmovie = [QTMovie movieWithFile:path error:&error];
            if (error != nil) {
                NSLog(@"error: %@", [error localizedDescription]);
            }

            Movie movie = [qtmovie quickTimeMovie];
            long trackCount = GetMovieTrackCount(movie);
            Track theTrack = GetMovieTrack(movie, 1);
            Media theMedia = GetTrackMedia(theTrack);
            MediaHandler theMediaHandler = GetMediaHandler(theMedia);

            QTSMediaPresentationParams myPres;
            ComponentResult c = QTSMediaGetIndStreamInfo(theMediaHandler, 1,
                                    kQTSMediaPresentationInfo, &myPres);

            Fixed shortdelay = 1 << 15;
            OSErr theErr = QTSPresSetInfo(myPres.presentationID, kQTSAllStreams,
                                    kQTSTargetBufferDurationInfo, &shortdelay);
            NSLog(@"OSErr %d", theErr);

            [movieView setMovie:qtmovie];
            [movieView play:self];
        }

    I seem to be getting valid objects/structures all the way down to the QTSPres, though the ComponentResult and OSErr are both returning -50. The streaming video plays fine, but the buffer is still 3.0 seconds. Any help/insight is appreciated. J

  • IIS7 dynamic content compression and webservices

    - by vandalo
    I am moving an old asmx web service to a new server with IIS7. This web service basically sends a big dataset (10 MB+) to a WinForms application. The old solution was implemented using a custom SOAP extension which compressed the content before sending the stream to the client. The client, of course, implemented the same custom SOAP extension to decompress the stream into a dataset. Everything has worked pretty well for years. My customer doesn't want to change the code by upgrading to WCF. They just want to put the old app on the new server and use the new dynamic content compression features. We're testing things on a test server (Windows Server 2008) and it seems to be working pretty well, even if it seems slow: we can't see any difference in performance (speed) between the uncompressed and compressed stream. Here's the question: where should I put the settings? Most people say I can't put it in my web.config; others say it can be put there. I am a bit confused. Are there any tricks or things I should know? What about mimeTypes? Should I set some parameters somewhere... considering my stream is XML (a dataset)? Thanks to everyone who would like to help. Alberto
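
    A hedged sketch of where the knobs usually live (exact settings to verify against the server, and the Dynamic Content Compression module has to be installed): the on/off switch can sit in the site's web.config unless the section is locked at server level, while the list of dynamically compressed MIME types lives in applicationHost.config; asmx SOAP responses are typically text/xml, so that type has to be enabled there.

        <!-- web.config of the site -->
        <system.webServer>
          <urlCompression doStaticCompression="true" doDynamicCompression="true" />
        </system.webServer>

        <!-- applicationHost.config (server level): enable the response's MIME type -->
        <system.webServer>
          <httpCompression>
            <dynamicTypes>
              <add mimeType="text/xml" enabled="true" />
              <add mimeType="text/xml; charset=utf-8" enabled="true" />
            </dynamicTypes>
          </httpCompression>
        </system.webServer>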
