Search Results

Search found 28160 results on 1127 pages for 'rich client platform'.

Page 6/1127 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • How to create a link to Nintex Start Workflow Page in the document set home page

    - by ybbest
    In this blog post, I'd like to show you how to create a link to the Nintex Start Workflow page on the document set home page.
    1. First, upload the latest version of jQuery to the Style Library of your team site.
    2. Then upload a text file to the Style Library to hold your own HTML and JavaScript.
    3. On the document set home page, insert a new Content Editor web part and link it to the text file you just uploaded.
    4. Update the text file with the following content (you can download this file here):

        <script type="text/javascript" src="/Style%20Library/jquery-1.9.0.min.js"></script>
        <script type="text/javascript" src="/_layouts/sp.js"></script>
        <script type="text/javascript">
        $(document).ready(function() {
            listItemId = getParameterByName("ID");
            setTheWorkflowLink("YBBESTDocumentLibrary");
        });

        function buildWorkflowLink(webRelativeUrl, listId, itemId) {
            var workflowLink = webRelativeUrl + "_layouts/NintexWorkflow/StartWorkflow.aspx?list=" + listId + "&ID=" + itemId + "&WorkflowName=Start Approval";
            return workflowLink;
        }

        function getParameterByName(name) {
            name = name.replace(/[\[]/, "\\\[").replace(/[\]]/, "\\\]");
            var regexS = "[\\?&]" + name + "=([^&#]*)";
            var regex = new RegExp(regexS);
            var results = regex.exec(window.location.search);
            if (results == null) {
                return "";
            } else {
                return decodeURIComponent(results[1].replace(/\+/g, " "));
            }
        }

        function setTheWorkflowLink(listName) {
            var SPContext = new SP.ClientContext.get_current();
            web = SPContext.get_web();
            list = web.get_lists().getByTitle(listName);
            SPContext.load(web, "ServerRelativeUrl");
            SPContext.load(list, 'Title', 'Id');
            SPContext.executeQueryAsync(setTheWorkflowLink_Success, setTheWorkflowLink_Fail);
        }

        function setTheWorkflowLink_Success(sender, args) {
            var listId = list.get_id();
            var listTitle = list.get_title();
            var webRelativeUrl = web.get_serverRelativeUrl();
            var startWorkflowLink = buildWorkflowLink(webRelativeUrl, listId, listItemId);
            $("a#submitLink").attr('href', startWorkflowLink);
        }

        function setTheWorkflowLink_Fail(sender, args) {
            alert("There is a problem setting up the submit exam approval link");
        }
        </script>
        <a href="" target="_blank" id="submitLink"><span style="font-size:14pt">Start the approval process.</span></a>

    5. Save your changes and go to the document set item; you will see the link on the home page now.
    Notes:
    1. You can create a link to start the workflow using the following build dynamic string configuration: {Common:WebUrl}/_layouts/NintexWorkflow/StartWorkflow.aspx?list={Common:ListID}&ID={ItemProperty:ID}&WorkflowName=workflowname. With this link you will still need to click the Start button; this is standard SharePoint behaviour and cannot be altered.
    References: http://connect.nintex.com/forums/27143/ShowThread.aspx
    How to use HTML and JavaScript in a Content Editor web part in SharePoint 2010

    Read the article

  • rich snippets ignored by google [closed]

    - by Thoir Fáidh
    Possible Duplicate: Why would Google Rich Snippets work for one site author but not another? I'm facing one problem here. I added rich snippets (microdata) to the website, but Google ignores all of them. Here is how it looks in the testing tool; it doesn't detect any errors. I've read that Google ignores microdata in hidden fields. Unfortunately this is partially the case, since I use jQuery to interact with the content, but it is not hidden everywhere, and I believe Google should recognize at least the microdata that is permanently visible to the user. Am I missing something here? It has now been about 3 weeks since I updated the website with rich snippets.

    Read the article

  • Cross platform mobile development VS Native Mobile Development: Present And Future.

    - by MobileDev123
    I just completed one year in smartphone development, working on BlackBerry and Android, and also developed one application targeted exclusively at Nokia feature phones. Just a month ago I came across the Titanium Appcelerator tool, which enables cross-platform development, but some developers complain about its sub-par functionality. Even my limited experience suggests that developing in the native environment rather than with these cross-platform tools gives you more advantages, because a developer gets the chance to add more features with better performance. Do you have the same experience? Or do you find such cross-platform tools really useful for advanced functionality and performance? As porting (or co-developing) the same application to different mobile platforms is common nowadays, what do you think: will these cross-platform tools evolve and push developers to adopt a hands-on approach to them, or will the majority stick to native development environments?

    Read the article

  • Unity in C# for Platform Specific Implementations

    - by DxCK
    My program interacts heavily with the operating system through Win32 API functions. Now I want to migrate it to run under Mono on Linux (no Wine), and this requires different implementations of the operating system interaction. I started designing code that can have a different implementation per platform and is extensible for future platforms.

        public interface ISomeInterface
        {
            void SomePlatformSpecificOperation();
        }

        [PlatformSpecific(PlatformID.Unix)]
        public class SomeImplementation : ISomeInterface
        {
            #region ISomeInterface Members
            public void SomePlatformSpecificOperation()
            {
                Console.WriteLine("From SomeImplementation");
            }
            #endregion
        }

        public class PlatformSpecificAttribute : Attribute
        {
            private PlatformID _platform;

            public PlatformSpecificAttribute(PlatformID platform)
            {
                _platform = platform;
            }

            public PlatformID Platform
            {
                get { return _platform; }
            }
        }

        public static class PlatformSpecificUtils
        {
            public static IEnumerable<Type> GetImplementationTypes<T>()
            {
                foreach (Assembly assembly in AppDomain.CurrentDomain.GetAssemblies())
                {
                    foreach (Type type in assembly.GetTypes())
                    {
                        if (typeof(T).IsAssignableFrom(type) && type != typeof(T) && IsPlatformMatch(type))
                        {
                            yield return type;
                        }
                    }
                }
            }

            private static bool IsPlatformMatch(Type type)
            {
                return GetPlatforms(type).Any(platform => platform == Environment.OSVersion.Platform);
            }

            private static IEnumerable<PlatformID> GetPlatforms(Type type)
            {
                return type.GetCustomAttributes(typeof(PlatformSpecificAttribute), false)
                    .Select(obj => ((PlatformSpecificAttribute)obj).Platform);
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                Type first = PlatformSpecificUtils.GetImplementationTypes<ISomeInterface>().FirstOrDefault();
            }
        }

    I see two problems with this design: I can't force implementations of ISomeInterface to have a PlatformSpecificAttribute, and multiple implementations can be marked with the same PlatformID, so I don't know which one to use in Main. Using the first one is ugly. How can I solve these problems? Can you suggest another design?
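    One possible direction, sketched below with illustrative names (PlatformResolver and the exception messages are not from the post, and it assumes the PlatformSpecificAttribute shown above): keep the attribute but fail fast at startup, so undecorated implementations and ambiguous platform matches become loud errors instead of a silent FirstOrDefault().

        using System;
        using System.Linq;

        // Sketch only: resolves exactly one implementation of T for the current platform.
        public static class PlatformResolver
        {
            public static T Create<T>() where T : class
            {
                var candidates = AppDomain.CurrentDomain.GetAssemblies()
                    .SelectMany(a => a.GetTypes())
                    .Where(t => typeof(T).IsAssignableFrom(t) && !t.IsInterface && !t.IsAbstract)
                    .ToList();

                // Problem 1: the attribute cannot be forced at compile time, but undecorated
                // implementations can at least fail loudly here.
                var undecorated = candidates
                    .Where(t => t.GetCustomAttributes(typeof(PlatformSpecificAttribute), false).Length == 0)
                    .ToList();
                if (undecorated.Count > 0)
                    throw new InvalidOperationException("Missing PlatformSpecificAttribute on: " +
                        string.Join(", ", undecorated.Select(t => t.Name).ToArray()));

                // Problem 2: ambiguous registrations are an error, not a silent "take the first".
                var matches = candidates
                    .Where(t => t.GetCustomAttributes(typeof(PlatformSpecificAttribute), false)
                                 .Cast<PlatformSpecificAttribute>()
                                 .Any(a => a.Platform == Environment.OSVersion.Platform))
                    .ToList();
                if (matches.Count != 1)
                    throw new InvalidOperationException(matches.Count + " implementations of " +
                        typeof(T).Name + " match platform " + Environment.OSVersion.Platform);

                return (T)Activator.CreateInstance(matches[0]);
            }
        }

    Usage would then be something like ISomeInterface impl = PlatformResolver.Create<ISomeInterface>(); in Main. A compile-time guarantee is not really available with attributes, so an alternative design is to drop the attribute entirely and register one factory per PlatformID in a dictionary.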

    Read the article

  • openerp client customization

    - by iamgopal
    The OpenERP client seems to be nice and working. I would like to hack it and use it as a front end to my OpenERP solution, but the documentation regarding client-side design and customization on the OpenERP site is poor. Is there any good reference or documentation available for digging further into OpenERP client-side coding? Or, more generally, is there any similar client solution available that can be plugged into any back-end system (i.e. a rich internet client)?

    Read the article

  • Citrix client slow to launch

    - by user706837
    I was wondering if anyone else has experienced the Citrix client launching very slowly. While I'm a Windows SA by trade, I consider myself Novice+ on Linux, but I doubt that's the problem. This is the simple scenario: 1. Log in to the Citrix server to work from home. 2. Click on the published application; this typically starts the local Citrix client. 3. The Citrix client should start and you're off. The problem is between #2 and #3: I click on the application, and 8 out of 9 times there is a 60-second delay and then I get an SSL connection error. I suspect this error is misleading, since the connection just took too long to open, but I don't know how to prove it (or fix it). I'm able to manually launch wfcmgr without errors, so this leads me to believe the Citrix client is installed correctly. I even leave it running, thinking this may help, but I don't see a difference with or without it running first. The only times I'm able to connect successfully are when the Citrix client starts up a few seconds after clicking on the application. I've searched online for articles that might help and tried a number of fixes without much difference. I even tried "ln -sf /dev/urandom /dev/random" as suggested by this article, but no dice: http://forums.citrix.com/message.jspa?messageID=1381276 My system (specs that may be relevant): Sony VAIO laptop VGN-NW270F, Linux Mint 11.04. The problem occurs with both Firefox and Chrome. Any help would be appreciated. I'm just trying to either find an answer or get guidance on how to determine why it's taking so long to launch the Citrix client. Thanks

    Read the article

  • Regular Windows 7 BSOD with Shrew VPN client

    - by Junto
    The Shrew VPN client appears to be a good alternative to the Cisco VPN Client on x64 Windows 7. However, since installing it I've seen fairly regular BSODs. Minidump attached: Microsoft (R) Windows Debugger Version 6.11.0001.404 AMD64 Copyright (c) Microsoft Corporation. All rights reserved. Loading Dump File [D:\Temp\020810-23431-01.dmp] Mini Kernel Dump File: Only registers and stack trace are available Symbol search path is: SRV*d:\symbols*http://msdl.microsoft.com/download/symbols Executable search path is: Windows 7 Kernel Version 7600 MP (8 procs) Free x64 Product: WinNt, suite: TerminalServer SingleUserTS Built by: 7600.16385.amd64fre.win7_rtm.090713-1255 Machine Name: Kernel base = 0xfffff800`0285f000 PsLoadedModuleList = 0xfffff800`02a9ce50 Debug session time: Mon Feb 8 18:08:12.887 2010 (GMT+1) System Uptime: 0 days 7:52:06.120 Loading Kernel Symbols ............................................................... ................................................................ .............. Loading User Symbols Loading unloaded module list .... ******************************************************************************* * * * Bugcheck Analysis * * * ******************************************************************************* Use !analyze -v to get detailed debugging information. BugCheck A, {0, 2, 0, fffff800028d50b6} Unable to load image \SystemRoot\system32\DRIVERS\vfilter.sys, Win32 error 0n2 *** WARNING: Unable to verify timestamp for vfilter.sys *** ERROR: Module load completed but symbols could not be loaded for vfilter.sys Probably caused by : vfilter.sys ( vfilter+29a6 ) Followup: MachineOwner Machine is a brand spanking new Dell Precision T5500. Superuser appears to have several recommendations for the Shrew VPN Client as an alternative to the Cisco VPN client on 64 bit machines, so I wondered if anyone here has seen this problem and possibly found a solution to the problem? I've decided to run the VPN client under Windows 7 compatibility mode for the moment (Vista SP2) with administrator privileges to see if it makes a difference. Oddly, the VPN doesn't necessarily need to be connected. I've noticed it when browsing using Google Chrome and Internet Explorer, usually when I open up a new tab. If it carries on I think I'll be forced to shell out the 120 EUR for the NCP Client instead.

    Read the article

  • PPTP ping client to client error

    - by Linux Intel
    I installed pptp server on a centos 6 64bit server PPTP Server ip : 55.66.77.10 PPTP Local ip : 10.0.0.1 Client1 IP : 10.0.0.60 centos 5 64bit Client2 IP : 10.0.0.61 centos5 64bit PPTP Server can ping Client1 And client 1 can ping PPTP Server PPTP Server can ping Client2 And client 2 can ping PPTP Server The problem is client 1 can not ping Client 2 route -n on PPTP Server Destination Gateway Genmask Flags Metric Ref Use Iface 10.0.0.60 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0 10.0.0.61 0.0.0.0 255.255.255.255 UH 0 0 0 ppp1 55.66.77.10 0.0.0.0 255.255.255.248 U 0 0 0 eth0 10.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 eth0 0.0.0.0 55.66.77.19 0.0.0.0 UG 0 0 0 eth0 route -n On Client 1 Destination Gateway Genmask Flags Metric Ref Use Iface 10.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0 55.66.77.10 70.14.13.19 255.255.255.255 UGH 0 0 0 eth0 10.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 eth1 0.0.0.0 70.14.13.19 0.0.0.0 UG 0 0 0 eth0 route -n On Client 2 Destination Gateway Genmask Flags Metric Ref Use Iface 10.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0 55.66.77.10 84.56.120.60 255.255.255.255 UGH 0 0 0 eth1 10.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 eth0 0.0.0.0 84.56.120.60 0.0.0.0 UG 0 0 0 eth1 cat /etc/ppp/options.pptpd on PPTP server ############################################################################### # $Id: options.pptpd,v 1.11 2005/12/29 01:21:09 quozl Exp $ # # Sample Poptop PPP options file /etc/ppp/options.pptpd # Options used by PPP when a connection arrives from a client. # This file is pointed to by /etc/pptpd.conf option keyword. # Changes are effective on the next connection. See "man pppd". # # You are expected to change this file to suit your system. As # packaged, it requires PPP 2.4.2 and the kernel MPPE module. ############################################################################### # Authentication # Name of the local system for authentication purposes # (must match the second field in /etc/ppp/chap-secrets entries) name pptpd # Strip the domain prefix from the username before authentication. # (applies if you use pppd with chapms-strip-domain patch) #chapms-strip-domain # Encryption # (There have been multiple versions of PPP with encryption support, # choose with of the following sections you will use.) # BSD licensed ppp-2.4.2 upstream with MPPE only, kernel module ppp_mppe.o # {{{ refuse-pap refuse-chap refuse-mschap # Require the peer to authenticate itself using MS-CHAPv2 [Microsoft # Challenge Handshake Authentication Protocol, Version 2] authentication. require-mschap-v2 # Require MPPE 128-bit encryption # (note that MPPE requires the use of MSCHAP-V2 during authentication) require-mppe-128 # }}} # OpenSSL licensed ppp-2.4.1 fork with MPPE only, kernel module mppe.o # {{{ #-chap #-chapms # Require the peer to authenticate itself using MS-CHAPv2 [Microsoft # Challenge Handshake Authentication Protocol, Version 2] authentication. #+chapms-v2 # Require MPPE encryption # (note that MPPE requires the use of MSCHAP-V2 during authentication) #mppe-40 # enable either 40-bit or 128-bit, not both #mppe-128 #mppe-stateless # }}} # Network and Routing # If pppd is acting as a server for Microsoft Windows clients, this # option allows pppd to supply one or two DNS (Domain Name Server) # addresses to the clients. The first instance of this option # specifies the primary DNS address; the second instance (if given) # specifies the secondary DNS address. 
#ms-dns 10.0.0.1 #ms-dns 10.0.0.2 # If pppd is acting as a server for Microsoft Windows or "Samba" # clients, this option allows pppd to supply one or two WINS (Windows # Internet Name Services) server addresses to the clients. The first # instance of this option specifies the primary WINS address; the # second instance (if given) specifies the secondary WINS address. #ms-wins 10.0.0.3 #ms-wins 10.0.0.4 # Add an entry to this system's ARP [Address Resolution Protocol] # table with the IP address of the peer and the Ethernet address of this # system. This will have the effect of making the peer appear to other # systems to be on the local ethernet. # (you do not need this if your PPTP server is responsible for routing # packets to the clients -- James Cameron) proxyarp # Normally pptpd passes the IP address to pppd, but if pptpd has been # given the delegate option in pptpd.conf or the --delegate command line # option, then pppd will use chap-secrets or radius to allocate the # client IP address. The default local IP address used at the server # end is often the same as the address of the server. To override this, # specify the local IP address here. # (you must not use this unless you have used the delegate option) #10.8.0.100 # Logging # Enable connection debugging facilities. # (see your syslog configuration for where pppd sends to) debug # Print out all the option values which have been set. # (often requested by mailing list to verify options) #dump # Miscellaneous # Create a UUCP-style lock file for the pseudo-tty to ensure exclusive # access. lock # Disable BSD-Compress compression nobsdcomp # Disable Van Jacobson compression # (needed on some networks with Windows 9x/ME/XP clients, see posting to # poptop-server on 14th April 2005 by Pawel Pokrywka and followups, # http://marc.theaimsgroup.com/?t=111343175400006&r=1&w=2 ) novj novjccomp # turn off logging to stderr, since this may be redirected to pptpd, # which may trigger a loopback nologfd # put plugins here # (putting them higher up may cause them to sent messages to the pty) cat /etc/ppp/options.pptp on Client1 and Client2 ############################################################################### # $Id: options.pptp,v 1.3 2006/03/26 23:11:05 quozl Exp $ # # Sample PPTP PPP options file /etc/ppp/options.pptp # Options used by PPP when a connection is made by a PPTP client. # This file can be referred to by an /etc/ppp/peers file for the tunnel. # Changes are effective on the next connection. See "man pppd". # # You are expected to change this file to suit your system. As # packaged, it requires PPP 2.4.2 or later from http://ppp.samba.org/ # and the kernel MPPE module available from the CVS repository also on # http://ppp.samba.org/, which is packaged for DKMS as kernel_ppp_mppe. ############################################################################### # Lock the port lock # Authentication # We don't need the tunnel server to authenticate itself noauth # We won't do PAP, EAP, CHAP, or MSCHAP, but we will accept MSCHAP-V2 # (you may need to remove these refusals if the server is not using MPPE) refuse-pap refuse-eap refuse-chap refuse-mschap # Compression # Turn off compression protocols we know won't be used nobsdcomp nodeflate # Encryption # (There have been multiple versions of PPP with encryption support, # choose which of the following sections you will use. 
Note that MPPE # requires the use of MSCHAP-V2 during authentication) # # Note that using PPTP with MPPE and MSCHAP-V2 should be considered # insecure: # http://marc.info/?l=pptpclient-devel&m=134372640219039&w=2 # https://github.com/moxie0/chapcrack/blob/master/README.md # http://technet.microsoft.com/en-us/security/advisory/2743314 # http://ppp.samba.org/ the PPP project version of PPP by Paul Mackarras # ppp-2.4.2 or later with MPPE only, kernel module ppp_mppe.o # If the kernel is booted in FIPS mode (fips=1), the ppp_mppe.ko module # is not allowed and PPTP-MPPE is not available. # {{{ # Require MPPE 128-bit encryption #require-mppe-128 # }}} # http://mppe-mppc.alphacron.de/ fork from PPP project by Jan Dubiec # ppp-2.4.2 or later with MPPE and MPPC, kernel module ppp_mppe_mppc.o # {{{ # Require MPPE 128-bit encryption #mppe required,stateless # }}} IPtables are stopped on clients and server, Also net.ipv4.ip_forward = 1 is enabled on PPTP Server. How can i solve this problem .?

    Read the article

  • Access VirtualBox client (WinXP) from host (Linux) when client is connected to VPN

    - by hsz
    Hello! I have a host (Ubuntu Linux) with VirtualBox, on which a client (Windows XP) is installed. I set up a bridged connection for them. The host has IP 192.168.0.102 and the client 192.168.0.103. On the client I've installed a WAMP server, and from the host I can access it by simply calling 192.168.0.103. When I connect the client to the Cisco VPN (I need access to a database over the VPN), I cannot access that server from the host anymore. What should I do to make it work?

    Read the article

  • Rich Text Editor in JavaScript

    - by chanthou
        iframe .text-bold {
            border: 1px solid orange;
            background-color: #ccc;
            width: 16px;
            height: 16px;
            font-weight: bold;
            cursor: pointer;
        }
        .active {
            border-color: #9DAECD #E8F1FF #E8F1FF #9DAECD;
            background-color: yellow;
        }

        function init() {
            iframe = document.createElement("iframe");
            document.body.appendChild(iframe);
            iframe.onload = setIframeEditable;
            isBold = false;
            div = document.getElementById("bold");
        }

        var setIframeEditable = function() {
            iframe.contentDocument.designMode = 'on';
            iframe.focus();
        }

        function makeBold() {
            if (!isBold) {
                //console.log(iframe.contentDocument.execCommand("bold", false, null));
                iframe.contentDocument.execCommand("bold", false, null);
                div.className += " active";
                isBold = true;
                iframe.focus();
            } else {
                //console.log(iframe.contentDocument.execCommand("bold", true, null));
                iframe.contentDocument.execCommand("bold", false, null);
                div.className = "text-bold";
                isBold = false;
                iframe.focus();
            }
        }
        </script>
        </head>
        <body onload="init()">
        <div id="bold" class="text-bold" onclick="makeBold()">B</div>
        </body>

    Read the article

  • How do I configure a C# web service client to send HTTP request header and body in parallel?

    - by Christopher
    Hi, I am using a traditional C# web service client generated in VS2008 / .NET 3.5, inheriting from SoapHttpClientProtocol. It connects to a remote web service written in Java. All configuration is done in code during client initialization and can be seen below:

        ServicePointManager.Expect100Continue = false;
        ServicePointManager.DefaultConnectionLimit = 10;
        var client = new APIService
        {
            EnableDecompression = true,
            Url = _url + "?guid=" + Guid.NewGuid(),
            Credentials = new NetworkCredential(user, password, null),
            PreAuthenticate = true,
            Timeout = 5000 // 5 sec
        };

    It all works fine, but the time taken to execute the simplest method call is almost double the network ping time, whereas a Java test client takes roughly the same as the network ping time: C# client ~550ms, Java client ~340ms, network ping ~300ms. Analyzing the TCP traffic for a session revealed the following. The C# client sent TCP packets in this sequence:
    1. Client sends HTTP headers in one packet.
    2. Client waits for TCP ACK from server.
    3. Client sends HTTP body in one packet.
    4. Client waits for TCP ACK from server.
    The Java client sent TCP packets in this sequence:
    1. Client sends HTTP headers in one packet.
    2. Client sends HTTP body in one packet.
    3. Client receives ACK for the first packet.
    4. Client receives ACK for the second packet.
    Is there any way to configure the C# web service client to send the header and body in parallel as the Java client appears to? Any help or pointers much appreciated.
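    One avenue worth checking, offered as an assumption rather than a confirmed fix: the send-wait-send pattern is typical of the HTTP 100-Continue handshake or of Nagle's algorithm holding back the second packet, and both can be switched off on the ServicePoint the proxy actually uses, e.g. via a partial class extending the generated proxy (the APIService name below mirrors the generated class in the question):

        // Sketch only: must live in the same namespace as the generated proxy class.
        public partial class APIService
        {
            protected override System.Net.WebRequest GetWebRequest(System.Uri uri)
            {
                var request = (System.Net.HttpWebRequest)base.GetWebRequest(uri);
                request.ServicePoint.Expect100Continue = false; // no separate header round trip waiting for 100-Continue
                request.ServicePoint.UseNagleAlgorithm = false; // don't delay the body packet behind the header's ACK
                request.KeepAlive = true;                       // reuse the TCP connection across calls
                return request;
            }
        }

    Since Expect100Continue is already cleared globally in the initialization snippet above, the per-ServicePoint flag mainly guards against a different ServicePoint being resolved; UseNagleAlgorithm is the extra variable worth testing.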

    Read the article

  • Reference platform specific System.Data.SQLite

    - by Dmitriy Nagirnyak
    Hi, I am using SQLite for unit testing and might use it as a database for local development/staging. System.Data.SQLite has basically two versions, x86 and x64, and the correct one should be used for the specific platform. I have 64-bit Win7; other guys on the team might use 32-bit OSs. The server's platform is not known at this stage. If I use the 32-bit version of the assembly on a 64-bit platform I get BadImageFormatException: Could not load file or assembly 'System.Data.SQLite'. I believe something similar will happen when trying to use the 64-bit assembly on a 32-bit platform. So my question is: what is the best way to reference the SQLite assembly so that it does not depend on the platform and people can just use it? It is OK to use the 32-bit version of the assembly on a 64-bit platform (maybe there is a switch for that somewhere?). Thanks, Dmitriy.
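    One common workaround, sketched below as an assumption rather than something from the question (the x86/x64 subfolder layout and the SqliteLoader name are illustrative): reference one build at compile time, ship both builds, and hook assembly resolution so the copy matching the process bitness is loaded at run time.

        using System;
        using System.IO;
        using System.Reflection;

        // Sketch: loads the System.Data.SQLite build that matches the current process.
        // Assumes both builds are copied to x86\ and x64\ subfolders of the application.
        public static class SqliteLoader
        {
            public static void Hook()
            {
                AppDomain.CurrentDomain.AssemblyResolve += delegate(object sender, ResolveEventArgs args)
                {
                    if (!args.Name.StartsWith("System.Data.SQLite", StringComparison.OrdinalIgnoreCase))
                        return null;

                    string arch = (IntPtr.Size == 8) ? "x64" : "x86";
                    string path = Path.Combine(AppDomain.CurrentDomain.BaseDirectory,
                                               Path.Combine(arch, "System.Data.SQLite.dll"));
                    return File.Exists(path) ? Assembly.LoadFrom(path) : null;
                };
            }
        }

    SqliteLoader.Hook() would need to run before anything touches a System.Data.SQLite type, e.g. first thing in Main or in a test fixture setup.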

    Read the article

  • recommendations for rich scalable internet application

    - by Wouter Roux
    I am planning to develop a web application that must perform the following actions: User authentication allow authenticated user to download data from USB device (roughly 5 meg data) upload the data from the USB device to processing server process the data and display the results to the user further requirements/restrictions: the USB driver supports windows (2000, XP, Vista, 7) the application must support IE, Firefox and Chrome the USB driver must be installed when pointing to the web application the first time the USB driver exposes functionality through exported functions in dll scalability and eye candy is important server details: Windows Server 2008 Enterprise x64, IIS 7.0, SQL server 2008 With the limited details specified: What technology would you recommend? (eg silverlight/asp.net mvc/wcf). What practices/patterns/3rd party controls would you recommend?

    Read the article

  • Proxied calls not working as expected

    - by AndyH
    I have been modifying an application to have a cleaner client/server split to allow for load splitting and resource sharing etc. Everything is written to an interface so it was easy to add a remoting layer to the interface using a proxy. Everything worked fine. The next phase was to add a caching layer to the interface and again this worked fine and speed was improved but not as much as I would have expected. On inspection it became very clear what was going on. I feel sure that this behavior has been seen many times before and there is probably a design pattern to solve the problem but it eludes me and I'm not even sure how to describe it. It is easiest explained with an example. Let's imagine the interface is interface IMyCode { List<IThing> getLots( List<String> ); IThing getOne( String id ); } The getLots() method calls getOne() and fills up the list before returning. The interface is implemented at the client which is proxied to a remoting client which then calls the remoting server which in turn calls the implementation at the server. At the client and the server layers there is also a cache. So we have :- Client interface | Client cache | Remote client | Remote server | Server cache | Server interface If we call getOne("A") at the client interface, the call is passed to the client cache which faults. This then calls the remote client which passes the call to the remote server. This then calls the server cache which also faults and so the call is eventually passed to the server interface which actually gets the IThing. In turn the server cache is filled and finally the client cache also. If getOne("A") is again called at the client interface the client cache has the data and it gets returned immediately. If a second client called getOne("B") it would fill the server cache with "B" as well as it's own client cache. Then, when the first client calls getOne("B") the client cache faults but the server cache has the data. This is all as one would expect and works well. Now lets call getLots( [ "C", "D" ] ). This works as you would expect by calling getOne() twice but there is a subtlety here. The call to getLots() cannot directly make use of the cache. Therefore the sequence is to call the client interface which in turn calls the remote client, then the remote server and eventually the server interface. This then calls getOne() to fill the list before returning. The problem is that the getOne() calls are being satisfied at the server when ideally they should be satisfied at the client. If you imagine that the client/server link is really slow then it becomes clear why the client call is more efficient than the server call once the client cache has the data. This example is contrived to illustrate the point. The more general problem is that you cannot just keep adding proxied layers to an interface and expect it to work as you would imagine. As soon as the call goes 'through' the proxy any subsequent calls are on the proxied side rather than 'self' side. Have I failed to learn or not learned something correctly? All this is implemented in Java and I haven't used EJBs. It seems that the example may be confusing. The problem is nothing to do with cache efficiencies. It is more to do with an illusion created by the use of proxies or AOP techniques in general. When you have an object whose class implements an interface there is an assumption that a call on that object might make further calls on that same object. 
For example, public String getInternalString() { return InetAddress.getLocalHost().toString(); } public String getString() { return getInternalString(); } If you get an object and call getString() the result depends where the code is running. If you add a remoting proxy to the class then the result could be different for calls to getString() and getInternalString() on the same object. This is because the initial call gets 'deproxied' before the actual method is called. I find this not only confusing but I wonder how I can control this behavior especially as the use of the proxy may be by a third party. The concept is fine but the practice is certainly not what I expected. Have I missed the point somewhere?
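    If the goal is for getLots() to benefit from the client cache, one option is to implement composite operations in the outermost caching layer rather than forwarding them whole through the proxy. A sketch follows, written in C# rather than the post's Java, with illustrative type names mirroring the IMyCode/IThing interface above:

        using System.Collections.Generic;
        using System.Linq;

        public interface IThing { }

        public interface IMyCode
        {
            IThing GetOne(string id);
            List<IThing> GetLots(List<string> ids);
        }

        // Client-side caching decorator wrapped around the remote client. GetLots is
        // implemented here in terms of this layer's own GetOne, so per-item cache hits
        // never reach the remote layer.
        public class CachingMyCode : IMyCode
        {
            private readonly IMyCode inner; // the remote-client proxy underneath
            private readonly Dictionary<string, IThing> cache = new Dictionary<string, IThing>();

            public CachingMyCode(IMyCode inner) { this.inner = inner; }

            public IThing GetOne(string id)
            {
                IThing thing;
                if (!cache.TryGetValue(id, out thing))
                {
                    thing = inner.GetOne(id); // only cache misses cross the wire
                    cache[id] = thing;
                }
                return thing;
            }

            public List<IThing> GetLots(List<string> ids)
            {
                // The composite operation stays in the outermost layer instead of being
                // forwarded whole, which is what pushed the getOne calls to the server.
                return ids.Select(GetOne).ToList();
            }
        }

    This does not remove the self-invocation limitation inside the server-side implementation; it only moves composite operations to the layer that owns the client cache, which is where the question expected them to be satisfied.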

    Read the article

  • Node.js Lockstep Multiplayer Architecture

    - by Wakaka
    Background I'm using the lockstep model for a multiplayer Node.js/Socket.IO game in a client-server architecture. User input (mouse or keypress) is parsed into commands like 'attack' and 'move' on the client, which are sent to the server and scheduled to be executed on a certain tick. This is in contrast to sending state data to clients, which I don't wish to use due to bandwidth issues. Each tick, the server will send the list of commands on that tick (possibly empty) to each client. The server and all clients will then process the commands and simulate that tick in exactly the same way. With Node.js this is actually quite simple due to possibility of code sharing between server and client. I'll just put the deterministic simulator in the /shared folder which can be run by both server and client. The server simulation is required so that there is an authoritative version of the simulation which clients cannot alter. Problem Now, the game has many entity classes, like Unit, Item, Tree etc. Entities are created in the simulator. However, for each class, it has some methods that are shared and some that are client-specific. For instance, the Unit class has addHp method which is shared. It also has methods like getSprite (gets the image of the entity), isVisible (checks if unit can be seen by the client), onDeathInClient (does a bunch of stuff when it dies only on the client like adding announcements) and isMyUnit (quick function to check if the client owns the unit). Up till now, I have been piling all the client functions into the shared Unit class, and adding a this.game.isServer() check when necessary. For instance, when the unit dies, it will call if (!this.game.isServer()) { this.onDeathInClient(); }. This approach has worked pretty fine so far, in terms of functionality. But as the codebase grew bigger, this style of coding seems a little strange. Firstly, the client code is clearly not shared, and yet is placed under the /shared folder. Secondly, client-specific variables for each entity are also instantiated on the server entity (like unit.sprite) and can run into problems when the server cannot instantiate the variable (it doesn't have Image class like on browsers). So my question is, is there a better way to organize the client code, or is this a common way of doing things for lockstep multiplayer games? I can think of a possible workaround, but it does have its own problems. Possible workaround (with problems) I could use Javascript mixins that are only added when in a browser. Thus, in the /shared/unit.js file in the /shared folder, I would have this code at the end: if (typeof exports !== 'undefined') module.exports = Unit; else mixin(Unit, LocalUnit); Then I would have /client/localunit.js store an object LocalUnit of client-side methods for Unit. Now, I already have a publish-subscribe system in place for events in the simulator. To remove the this.game.isServer() checks, I could publish entity-specific events whenever I want the client to do something. For instance, I would do this.publish('Death') in /shared/unit.js and do this.subscribe('Death', this.onDeathInClient) in /client/localunit.js. But this would make the simulator's event listeners list on the server and the client different. Now if I want to clear all subscribed events only from the shared simulator, I can't. 
Of course, it is possible to create two event subscription systems - one client-specific and one shared - but now the publish() method would have to do if (!this.game.isServer()) { this.publishOnClient(event); }. All in all, the workaround off the top of my head seems pretty complicated for something as simple as separating the client and shared code. Thus, I wonder if there is an established and simpler method for better code organization, hopefully specific to Node.js games.

    Read the article

  • Do you charge a client for email and chat communication as a freelancer? [closed]

    - by skyork
    For a project that is billed by the hour, should a freelancer charge the client for the amount of time he/she spends on email/chat correspondence? For example, the client sends an email to the freelancer, outlining the requirements. Should the freelancer charge the client for the time during which he/she reads the email and writes a reply? The same goes for chat conversations for clarifying the requirements. In particular, if the freelancer's English is not very good, so that he/she spends extra time understanding what the client wants and explaining him/herself (e.g. copying and pasting into Google Translate), should such time be charged to the client too?

    Read the article

  • Webcast Q&A: Cisco's Platform Approach to Identity Management

    - by Tanu Sood
    Thanks to all who attended the live webcast we hosted on Cisco: Best Practices for a Platform Approach on Wed, March 14th. For those of you who couldn't join us, the webcast replay is now available. Many thanks to our guest speaker, Ranjan Jain, Security Architect at Cisco, for walking us through Cisco's drivers and rationale for the platform approach, the implementation strategy, results, roadmap and recommendations. We greatly appreciate the insight he shared with us all on the deployment synergies of a platform approach to Identity Management. A forward-looking organization, Cisco also has plans for secure cloud and mobile access enablement, so it was interesting to learn how the platform approach to Identity Management today is laying down the foundation for those future initiatives. While we tackled a good few questions during the webcast, we have captured the responses to those that we weren't able to get to:
    Q. Can you provide insight into how you approached developing profiles for each user group?
    A. At Cisco, the user profile was already available to IT before the platform consolidation started. There is a dedicated business team that manages the user profiles.
    Q. What is the current version of Oracle Identity Manager in the market?
    A. Oracle Identity Manager 11gR1 is the latest version of our industry-leading user provisioning/identity administration solution.
    Q. Is data resource segmentation part of the overall strategy at Cisco?
    A. It is, but it is managed by the business teams and not at the IT level.
    Q. Does Cisco also have an Active Directory LDAP? Do they sync AD from OID or do they provision to AD as another resource?
    A. Yes, we do. AD is provisioned using in-house tools and not via Oracle Identity Manager (OIM).
    Q. If we already have a point IDM solution in place (SSO), can the platform approach still work?
    A. Yes, the platform approach calls for a seamless, standardized framework for identity management to support the enterprise's entire infrastructure, both on-premise and in the cloud. Oracle Identity Management solutions are standards based, so they can easily integrate and interoperate with existing Oracle or non-Oracle solutions.
    Hope you enjoyed the webcast, and we look forward to having you join us for the next webcast in our Customers Talk: Identity as a Platform webcast series: ING: Scaling Role Management and Access Certification to Thousands of Applications, Wednesday, April 11th at 10 am PST / 1 pm EST. Register Today.
    We are also hosting a live event series in collaboration with the Aberdeen Group. To hear first-hand the insights from the recently released Aberdeen report and to discuss the merits of the platform approach, do join us at this event. You can also connect with Oracle Identity Management SMEs and get your questions answered live. Aberdeen Group Live Event Series: IAM Integrated - Analyzing the "Platform" vs. "Point Solution" Approach, North America, April 10 - May 22. Register for an event near you.
    And here's the slide deck from our Cisco webcast: Oracle_Cisco identity platform approach_webcast

    Read the article

  • How to HIDE "client denied by server configuration:" error in log

    - by Keith
    I want to block access to my web server by default as a precaution, but I keep getting the following errors showing up in my error log:

        [Wed Jun 27 23:30:54 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/Edu.jar
        [Wed Jun 27 23:32:40 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/REST.jar
        [Wed Jun 27 23:35:39 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/Set.jar
        [Thu Jun 28 01:01:17 2012] [error] [client 58.218.199.227] client denied by server configuration: /home/www/default/proxyheader.php
        [Thu Jun 28 02:34:57 2012] [error] [client 58.218.199.227] client denied by server configuration: /home/www/default/proxy.php
        [Thu Jun 28 05:41:33 2012] [error] [client 58.218.199.227] client denied by server configuration: /home/www/default/proxyheader.php
        [Thu Jun 28 06:55:10 2012] [error] [client 180.76.6.20] client denied by server configuration: /home/www/default/
        [Thu Jun 28 07:31:26 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/Edu.jar
        [Thu Jun 28 07:32:25 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/REST.jar
        [Thu Jun 28 07:36:10 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/Set.jar

    I don't really want these errors to show up, but whatever I do, I can't get rid of them. Does anyone know how I can achieve this? Here is a copy of my configuration:

        <VirtualHost *:80>
            DocumentRoot /home/www/default
            <Directory />
                AllowOverride None
                Order Deny,Allow
                Deny from all
            </Directory>
            #ErrorLog /var/log/apache2/error.log
            #LogLevel warn
            CustomLog /var/log/apache2/access.log combined
        </VirtualHost>

    Read the article

  • configuring linux console email client to check attachments

    - by Christopher
    I need to configure an IMAP4-capable (console-based) email client to: - check and edit the name of an attachment (does it contain umlauts? - change the character ä to ae) - delete emails that don't fit certain requirements (not PDF, DOC, ..., not from domain xyz.com). Whether the client can do everything by itself or just triggers a script on incoming mail doesn't matter. Does anyone have an idea which mail client would be suitable for such a task?

    Read the article

  • Free tools for analyzing client-server communication issues

    - by roberto
    Hi, we have some issues with a client-server based application and we would like to better understand client-server communication without going to the software company that sold the application. At least we would like to perform the analysis in parallel. Can you suggest to me a dummy proof application that we can easily get and install to analyze client-server traffic? Many thanks!

    Read the article

  • Web Service Client in JBoss 5.1 with JDK6

    - by dcp
    This is a continuation of the question here: http://stackoverflow.com/questions/2435286/jboss-does-app-have-to-be-compiled-under-same-jdk-as-jboss-is-running-under It's different enough though that it required a new question. I am trying to use jdk6 to run JBOSS 5.1, and I downloaded the JDK6 version of JBOSS 5.1. This works fine and my EAR application deploys fine. However, when I want to run a web service client with code like this: public static void main(String[] args) throws Exception { System.out.println("creating the web service client..."); TestClient client = new TestClient("http://localhost:8080/tc_test_project-tc_test_project/TestBean?wsdl"); Test service = client.getTestPort(); System.out.println("calling service.retrieveAll() using the service client"); List<TestEntity> list = service.retrieveAll(); System.out.println("the number of elements in list retrieved using the client is " + list.size()); } I get the following exception: javax.xml.ws.WebServiceException: java.lang.UnsupportedOperationException: setProperty must be overridden by all subclasses of SOAPMessage at org.jboss.ws.core.jaxws.client.ClientImpl.handleRemoteException(ClientImpl.java:396) at org.jboss.ws.core.jaxws.client.ClientImpl.invoke(ClientImpl.java:302) at org.jboss.ws.core.jaxws.client.ClientProxy.invoke(ClientProxy.java:170) at org.jboss.ws.core.jaxws.client.ClientProxy.invoke(ClientProxy.java:150) Now, here's the really interesting part. If I change the JDK that my the code above is running under from JDK6 to JDK5, the exception above goes away! It's really strange. The only way I found for the code above to run under JDK6 was to take the JBOSS_HOME/lib/endorsed folder and copy it to JDK6_HOME/lib. This seems like it shouldn't be necessary, but it is. Is there any other way to make this work other than using the workaround I just described?

    Read the article

  • WordPress content authoring tutorial for non-techie client...

    - by metal-gear-solid
    I made a website for a client in WordPress and the client will add his own content. The client doesn't know how to handle WordPress or XHTML/CSS, but he knows MS Word 2007. The client is in a remote location. Are there any easy-to-understand articles or video tutorials I can give to the client on how to understand the WordPress admin and add content/images/video using the editor? And how can I disable unneeded things in the WordPress admin for the client?

    Read the article

< Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >