Search Results

Search found 34110 results on 1365 pages for 'gdata python client'.


  • Do you think we will ever settle on a "standard" platform? [closed]

    - by GazTheDestroyer
    The recent explosion of phone platforms has depressed me (slightly), and made me wonder if we will ever reach any kind of standard for presentation? I don't mean language or IDE. Different languages have different strengths and I can see that there may always be a need for disparity, although I do note that languages are merging somewhat in functionality, with traditional imperative languages like C++ now supporting things like lambdas. What I'm really talking about is a common presentation mechanism. Before smart phones and tablets came along, the web seemed to be finally becoming a reasonable platform for presenting an application that was globally accessible, not just geographically, but by platform too. Sure, there are still (sometimes infuriating) implementation differences and quirks, but if you wrote a decent site you knew it could be accessed on anything from a PC to a phone to a C64 running the right software. "Write Once, Run Anywhere" seemed to finally be becoming a reality. However, in the last few years we've seen an explosion of mobile operating systems and the ubiquitous "app". A good site is no longer enough; you need a native "app", and of course we have a sudden massive disparity in OS, language, and APIs needed to write them as each battles for supremacy. It's kind of weird how the cycle of popularity goes. Mainframes with terminals - thin client. PC - thick client. Web browser - thin client. Phone app - thick(ish) client. I just wonder if you think there will ever be a global standard for clients, or whether the "shiny and different" cycle will always continue along with the battle of the tech du jour.

    Read the article

  • Remotely Managing Storage on Hyper-V 2012 Core

    - by Vazgen
    I have a core Hyper-V Server 2012 that I am remotely managing from a Windows 8 client. I can connect in Hyper-V Manager, Server Manager, and MMC. However, I don't understand how I can manage the physical hard drive (for example, deleting vhdx files, creating folders, etc.) from my Windows 8 client. I tried to attach the remote share as follows: q: \\MyServer\c$ It said the command completed successfully, but I don't see the drive in Explorer on my client. I can get to it in cmd.exe on the client, but how can I manage it in a GUI? explorer q: throws an error:
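
    One common cause, offered here as an assumption rather than a confirmed diagnosis: a drive mapped in an elevated cmd.exe is invisible to the non-elevated Explorer shell, because UAC keeps the two sessions' drive mappings separate. A hedged sketch, assuming the server is reachable as MyServer:

        rem Map the administrative share from a normal, non-elevated prompt,
        rem so the mapping lands in the same session Explorer uses:
        net use Q: \\MyServer\c$ /persistent:yes
        explorer Q:\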

    Read the article

  • Gathering application architecture

    - by userbb
    Suppose there is a system for gathering info about system activities. There is a client part with an interface, and there are agent parts that are installed on each machine. I estimate that there could be a max of 20 computers now; later it could be more, like 50. My solutions:
    1. The agent stores data in a local database, e.g. SQLite. There is also a service which can be used by a client to query data. So if a client wants to display data for 50 computers, it sends a query to 50 computers. I'm on that solution now, but maybe it's totally wrong.
    2. The agent stores data in a local database (I don't know a good one for that). There is also a server (main database), and the local databases are synchronized with the server. In this case, a client connects to the main database to display data.
    3. The agent sends data in realtime to the main database. So, the same as point 2, but there is no sync.
    4. Like point 3, but the agent buffers data in a local database and sends it in small chunks to the main database.
    What is the best approach?
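
    A minimal sketch of the agent side of option 4, assuming a SQLite buffer table named events and a hypothetical HTTP collector endpoint (http://mainserver/ingest) on the central server:

        import json
        import sqlite3
        import urllib2

        def flush_buffer(db_path="agent.db", url="http://mainserver/ingest", chunk=100):
            # Push buffered events to the central server in small chunks.
            # Rows are deleted only after the server has accepted the chunk,
            # so nothing is lost while the agent is offline.
            conn = sqlite3.connect(db_path)
            rows = conn.execute(
                "SELECT id, payload FROM events ORDER BY id LIMIT ?", (chunk,)
            ).fetchall()
            if rows:
                body = json.dumps([payload for _, payload in rows])
                req = urllib2.Request(url, body, {"Content-Type": "application/json"})
                urllib2.urlopen(req)
                conn.execute("DELETE FROM events WHERE id <= ?", (rows[-1][0],))
                conn.commit()
            conn.close()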

    Read the article

  • Solaris 11 VNC Server is "blurry" or "smeared"

    - by user12620111
    I've been annoyed by the quality of the image that is displayed by my VNC viewer when I visit a Solaris 11 VNC server. How should I describe the image? Blurry? Grainy? Smeared? Low resolution? Compressed? Badly encoded? This is what I have gotten used to seeing on Solaris 11. This is not a problem for me when I view Solaris 10 VNC servers. I've finally taken the time to investigate, and the solution is simple: on the VNC client, don't allow "Tight" encoding. My VNC viewer will negotiate to Tight encoding if it is available. When negotiating with a Solaris 10 VNC server, Tight is not a supported option, so the Solaris 10 server and my client will agree on ZRLE. Now that I have disabled Tight encoding on my VNC client, the Solaris 11 VNC server looks much better. How should I describe the display when my VNC client is forced to negotiate ZRLE encoding with the Solaris 11 VNC server? Crisp? Clear? Higher resolution? Using a lossless compression algorithm? When I'm on a low bandwidth connection, I may re-enable Tight compression on my laptop. In the meantime, the ZRLE compression is sufficient for a coast-to-coast desktop, through the corporate firewall, encrypted by VPN, through my ISP and onto my laptop. YMMV.
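
    For reference, a hedged example of forcing the encoding from the client side with a TigerVNC-style viewer (the exact flag name varies between VNC viewers, and the host name is made up):

        vncviewer -PreferredEncoding=ZRLE solaris11-host:1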

    Read the article

  • Ubuntu 12.10 blender problem

    - by SamueLL
    Please, I have a problem. I want to install an older version of Blender because I don't feel comfortable in the new UI. I downloaded blender-2.46-linux-glibc236-py25-x86_64 from the official site. I had some problems, like the missing python-2.5 library, but I fixed them, and now I have a problem that I can't solve: Compiled with Python version 2.7.3. Warning: could not set Blender.sys.progname Can you help me with this, please?
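
    A hedged first check, assuming the extracted binary is called blender: confirm which Python shared library the 2.46 build actually links against, since a py25 build expects libpython2.5 even when Python 2.7 is what's installed on the system:

        ldd ./blender | grep -i python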

    Read the article

  • Offline web app options

    - by L. De Leo
    For a game web app that runs Python on the server side and Javascript/HTML on the client side, I'd like to build an offline version that runs in Chrome and on mobile devices. What is the most convenient way currently available to target Chrome, the Win 8 desktop (with a Windows packaged app), and mobile devices, reusing most of the code? Options could be PhoneGap for the mobile devices and PyJs for the offline browser versions, or maybe translating Python to Dart manually (because of the closer semantics of the two languages) and compiling that to Javascript.
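
    For the in-browser part, the standard offline mechanism of that era is the HTML5 Application Cache; a minimal, hedged sketch (the file names are hypothetical):

        <!-- game.html: point the page at a cache manifest -->
        <html manifest="game.appcache">

        # game.appcache, served as text/cache-manifest
        CACHE MANIFEST
        # v1 -- change this comment to force clients to re-fetch
        game.js
        game.css
        sprites.png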

    Read the article

  • How is it detected if the last ACK is lost in the TCP connection termination procedure?

    - by sonali
    In TCP connection termination, when the client enters the TIME_WAIT state, it waits for a period of time equal to double the maximum segment lifetime (MSL) to ensure the ACK it sent was received. (I read the above in the book Computer Networking by Kurose; it is also given at the following URL: http://www.tcpipguide.com/free/t_TCPConnectionTermination-2.htm ) But how is it detected if the last ACK (sent by the client in response to the server's FIN) is lost?

    Read the article

  • Generating SSH Keys on the Server

    - by mupro
    I have set up sshd on a Linux server and managed to log in via keys generated using ssh-keygen. However, I have made the following observation: when I generate the key pair on the client and copy the public key to the server, everything works fine. But when I generate the key pair on the server and copy the private key to the client, I cannot log in. Can anybody explain to me if and why the keys have to be created on the client?
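
    For comparison, a hedged sketch of the usual client-side flow (user and server are placeholders). Where the pair is generated does not matter cryptographically; what tends to matter is that the private key lands in ~/.ssh on the client with strict permissions, and the public key in the server's authorized_keys:

        ssh-keygen -t rsa          # creates ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub
        ssh-copy-id user@server    # appends id_rsa.pub to the server's
                                   # ~/.ssh/authorized_keys
        chmod 600 ~/.ssh/id_rsa    # ssh refuses private keys readable by others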

    Read the article

  • Specific issue on data pump API in oracle

    - by Median Hilal
    I have a client/server architecture, using an Oracle DBMS on the database server side. I need to perform a user-triggered (from the client side) backup of the database, and the best way to perform that is a stored procedure on the server side which the client may call, as the client has no Oracle tools to perform the backup. I've searched through the available solutions and found that using a stored procedure is the best way, and that the Oracle Data Pump API is the best thing to use inside a PL/SQL stored procedure. I would like to ask about two issues. The first: the DETACH function to detach the handler - is it necessary to use it at the end of the procedure, and what happens if I don't? I read the Oracle documentation but I didn't get their point; they say it doesn't terminate the job but indicates that the user is not interested in it. Yet when I use DETACH at the end of my procedure, the exported .dmp file disappears. The second: to perform a user (client-side) triggered backup where the modifications are only to the data, I used the TABLE mode for the export operation. But the VERSION parameter - what should it be? I also read the documentation but couldn't determine what I need (LATEST or COMPATIBLE)? Thanks
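
    A minimal, hedged PL/SQL sketch of such a procedure (the directory object BACKUP_DIR and table name are hypothetical). Note that it waits for the job to finish before detaching, so the .dmp file is complete when the procedure returns; detaching from a still-running job inside a short-lived session is one plausible explanation for the vanishing dump file, though that is an assumption:

        CREATE OR REPLACE PROCEDURE backup_tables AS
          h     NUMBER;
          state VARCHAR2(30);
        BEGIN
          -- VERSION defaults to 'COMPATIBLE'; 'LATEST' ties the dump to the
          -- current database version instead.
          h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'TABLE',
                                  version   => 'COMPATIBLE');
          DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'backup.dmp',
                                 directory => 'BACKUP_DIR');
          DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'NAME_EXPR',
                                        value => 'IN (''MY_TABLE'')');
          DBMS_DATAPUMP.START_JOB(h);
          -- Block until the export completes, then release the handle.
          DBMS_DATAPUMP.WAIT_FOR_JOB(h, state);
          DBMS_DATAPUMP.DETACH(h);
        END;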

    Read the article

  • Multiple network interfaces and UDP packets distribution

    - by Robert Kubrick
    I have a Linux server with two 1 Gb network interfaces, eth1 and eth2. If I start two clients listening to the same multicast address, and each client connects through a different NIC (say, client 1 listens to the multicast through eth1 and client 2 through eth2), then client 2 gets duplicate UDP packets. If both clients use the same interface eth1, on the other hand, both clients work fine. I have already tried setting arp_filter and proxy_arp to 1 (the arp flux issue), but it hasn't solved the issue. Is this a Linux kernel problem? Or is there another way to set up the interfaces correctly?
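
    For context, a hedged sketch of pinning a multicast subscription to one interface at the socket level (group, port, and interface addresses are made up); each listener joins the group via exactly one NIC:

        import socket

        GROUP, PORT = "239.0.0.1", 5000   # hypothetical multicast group
        IFACE = "192.168.1.10"            # address of the NIC to join through

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        # Bind to the group address rather than 0.0.0.0, so traffic for
        # other groups on the same port is filtered out.
        sock.bind((GROUP, PORT))
        # IP_ADD_MEMBERSHIP takes the group address followed by the local
        # interface address; the second part ties the join to eth1 or eth2.
        mreq = socket.inet_aton(GROUP) + socket.inet_aton(IFACE)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

        while True:                       # Python 2 era, matching the posts
            data, addr = sock.recvfrom(65535)
            print len(data), addr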

    Read the article

  • SSH with public/private key to iMac fails.

    - by bennedich
    I'm trying to connect to my iMac (server) from my MacBook (client) on my LAN. Both have Mac OS X 10.6.4; the server is running on a new, clean install of the OS. When just activating Remote Login in System Preferences, everything works fine. But when setting up ssh to only work with a public/private key, I get the following error messages from the server log, depending on whether I use an RSA passphrase or not. With passphrase (case 1): PAM: user account has expired for <myServerUserName> from 192.168.X.X via 192.168.X.Y Without passphrase (case 2): Failed publickey for <myServerUserName> from 192.168.X.X port AAAAA ssh2 This is my setup algorithm:
    1. Create a private and public key on the client with the command ssh-keygen -t rsa. In case 1 I also set a passphrase.
    2. Move id_rsa.pub to the server path /Users/<myServerUserName>/.ssh/
    3. In this folder, execute cat id_rsa.pub > authorized_keys
    4. Making sure Remote Login isn't active, execute sudo /usr/sbin/sshd -d on the server.
    5. Back on the client, type ssh -v -v -v <myServerUserName>@192.168.X.Y and get prompted to accept the RSA key fingerprint. This is NOT the same fingerprint as the one from when I created the private/public key (should it be?). I accept.
    Depending on the case: Case 1: the client gets halted for a password and the response is permission denied, even though the correct password is given. Back on the server I can read the error message stated above for case 1: PAM: user account has expired... Case 2: the client gets the message Connection closed by 192.168.X.Y. Back on the server I can read the error message stated above for case 2: Failed publickey... What could possibly cause this?
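
    One common cause of "Failed publickey" (an assumption here, not confirmed for this machine) is file permissions: sshd silently rejects keys when ~/.ssh or authorized_keys is group- or world-writable. A hedged check on the server, keeping the poster's <myServerUserName> placeholder:

        chmod 700 /Users/<myServerUserName>/.ssh
        chmod 600 /Users/<myServerUserName>/.ssh/authorized_keys
        # StrictModes (the default) also requires the home directory itself
        # not to be group-writable:
        ls -ld /Users/<myServerUserName>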

    Read the article

  • Excellent C Tutorials

    - by nebffa
    I've looked high and low for C tutorials that have lots of exercises to do along the way, but in my experience all the guides I've found are mostly explanation with a bit of pre-written code, and lack exercises for you to do. I started learning Python using Learn Python the Hard Way, and for almost all other standard languages there are good sites to learn and grapple with the syntax - for example codecademy.com, programr.com. Is there any site like any of the above for C?

    Read the article

  • SEO consideration for duplicate sites

    - by Malk
    I am building a brochure-ware website for a company that sells products all across the world. They need the site to ask the user what region they are in before using the site; there are 5 regions. This is because there are different products offered to different regions, and each region may or may not want to customize its own content. However, at launch and likely forever, most of the pages will be exactly the same minus what is listed in the footer and in the product selection menu. My question is: how should I structure the sitemap for this site for best SEO? Should I be concerned with duplicate content penalties and/or cannibalizing the site's presence on the SERP? Some considerations:
    - The client wants to be able to print links directly to region-specific content, bypassing any prompt for the user to select a region (to ensure they land on the target page).
    - The client cannot have a 'default' region, so the user must have a region specified.
    - "Clean" URLs are important, but there is wiggle room.
    - The client does not want each region to have its own domain.
    - There will be a link on the page to allow users to specify a different region.
    - The client is not concerned with localization ...at this time.
    - Some products are available in multiple regions.
    A quick list of options I am considering:
    - www.site.com/region/page
    - region.site.com/page
    - www.site.com/page?region (no cookie; pages require the parameter. If visited without it, the user must select a region)
    - www.site.com/page (using a cookie and a splash screen if needed; could pass a parameter in to set the region for direct linking)
    Thanks in advance for your advice.
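
    One hedged mitigation for the near-duplicate pages, offered as an illustration rather than the poster's plan: declare one region's URL as canonical on every regional twin of a page whose content is identical (the region names here are hypothetical):

        <!-- on www.site.com/emea/about and every regional twin of that page -->
        <link rel="canonical" href="http://www.site.com/na/about" />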

    Read the article

  • Routing connections through VPN based on hostname (not IP range)

    - by Michal M
    This bugs me immensely. I need to connect to a client's network through VPN, but I definitely do not want to send all the traffic through the client's network, so that option is out of the question. What I need, basically, is for the OS to know that all the client's network subdomains (*.example.com) need to go through the VPN connection. I tried a couple of options:
    - Changing the order of services and setting the VPN on top, but this works the same as "Send all traffic over VPN connection".
    - Using the "VPN on Demand" option from the network advanced options, but this feature is quite rubbish, to be honest. It seems to work only in Safari (?!), and it doesn't route the connection; it basically triggers the OS to connect to the selected VPN.
    The reason I need it to work based on hostnames rather than an IP range is simple: my client has a lot of servers inside his network and it's impossible for me to remember all the IPs. They are all within a range, but this doesn't help me remember them. Another option would be to put the VPN connection at the bottom of the network services, untick "Send all traffic...", and then put all known hostnames in the hosts file; but considering there could be hundreds of servers (and therefore hostnames and IPs too), it's a ridiculous job. And if a new server appears on the network, I'd need to edit the hosts file again. Sisyphean labours. However, this works on Windows very simply: if a hostname is not available through the default network interface, then it seems to try the VPN connection, and this works brilliantly. So, how can I achieve that on a Mac? I know the client's internal DNS addresses, if that is of any help (like directing certain domains through a different DNS)? PS. Using latest version 10.6.6. PS2. I am using the VPN to access the intranet, version control servers (svn://), samba shares, and for SSH access to servers.
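
    On the DNS side at least, Mac OS X supports per-domain resolvers: a file in /etc/resolver/ named after the domain sends queries for that domain to a specific nameserver. A hedged sketch (the nameserver address is made up, and this only redirects name resolution; routing of the actual traffic still depends on the routes the VPN installs):

        # /etc/resolver/example.com
        nameserver 10.1.2.3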

    Read the article

  • SSL certificates: how to use it?

    - by Rod
    I have a central server and I want to purchase an SSL certificate for it. The architecture is based on this central server and many connected web servers which are on the client side (one for each user). A client could access both the main server and its local server; moreover, the two servers exchange data between them. I would like the client's web browser to trust all the servers, always activating https and a secure connection when connecting to them. Assuming I can name all the servers under the same domain name (I was thinking about a wildcard certificate anyway), which kind of certificate, or which use of it, can make these secure connections work? There is the possibility that the main server and a client-side server are not connected for a while. Is it possible to activate an https connection from a client to its local server in this case? When I need to renew or change the certificate, I would like to change it just on the main server, avoiding the need to touch all the servers on the clients' side. Can I do that in some way?
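
    For reference, a hedged sketch of requesting a wildcard certificate (example.com stands in for the real domain); the same key and issued certificate would then be deployed to every host named under that domain:

        openssl req -new -newkey rsa:2048 -nodes \
            -keyout star.example.com.key -out star.example.com.csr \
            -subj "/CN=*.example.com"
        # Submit star.example.com.csr to the CA, then install the issued
        # certificate plus star.example.com.key on each server.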

    Read the article

  • Other processes take over port 80 when restarting Apache - why, and how to solve?

    - by user72149
    I have a CentOS 5.5 server running Apache on port 80, as well as some other applications. All works fine until I for some reason need to restart the httpd process. Doing so returns:

        sudo /etc/init.d/httpd restart
        Stopping httpd:                                            [  OK  ]
        Starting httpd: (98)Address already in use: make_sock: could not bind to address [::]:80
        (98)Address already in use: make_sock: could not bind to address 0.0.0.0:80
        no listening sockets available, shutting down
        Unable to open logs

    First I thought perhaps httpd had frozen and was still running, but that was not the case. So I ran netstat to find out what was using port 80:

        sudo netstat -tlp
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address              Foreign Address  State   PID/Program name
        tcp   0      0      *:7203                     *:*              LISTEN  24012/java
        tcp   0      0      localhost.localdomain:smux *:*              LISTEN  3547/snmpd
        tcp   0      0      *:mysql                    *:*              LISTEN  21966/mysqld
        tcp   0      0      *:ssh                      *:*              LISTEN  3562/sshd
        tcp   0      0      *:http                     *:*              LISTEN  3780/python26

    Turns out that my python process had taken over listening to http in the instant that httpd was restarting. So, I killed python and tried starting httpd again - but ran into the same error. Netstat again:

        sudo netstat -tlp
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address              Foreign Address  State   PID/Program name
        tcp   0      0      *:7203                     *:*              LISTEN  24012/java
        tcp   0      0      localhost.localdomain:smux *:*              LISTEN  3547/snmpd
        tcp   0      0      *:mysql                    *:*              LISTEN  21966/mysqld
        tcp   0      0      *:ssh                      *:*              LISTEN  3562/sshd
        tcp   0      0      *:http                     *:*              LISTEN  24012/java

    Now my java process had taken over listening to http. I killed that too and could then successfully restart httpd. But this is a terrible workaround. Why will these python and java processes start listening to port 80 as soon as httpd is restarted? How to solve? Two other comments. 1) Both the java and python processes are started by apache from a php script. But when apache is restarted, they should not be affected. And 2) I have the same setup on two other machines running Ubuntu and there's no problem there. Any ideas? Edit: The java process listens to port 7203 and the python process supposedly doesn't listen to any port. For some reason, they start listening to port 80 when apache is restarted. This hasn't happened before. On Ubuntu it runs fine. For some reason, on my current CentOS 5.5 machine, this problem arises.
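
    A hedged explanation of this symptom (plausible, but not confirmed for this machine): child processes spawned by Apache inherit its open file descriptors, including the port-80 listening socket, so long-lived children keep the port bound after httpd stops. A sketch of launching such helpers without inheriting descriptors, with a hypothetical worker path:

        import subprocess

        # close_fds=True closes every inherited descriptor beyond
        # stdin/stdout/stderr in the child, so a long-running helper cannot
        # hold Apache's listening socket open across a restart.
        subprocess.Popen(["/usr/local/bin/worker.py"], close_fds=True)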

    Read the article

  • vmware server 64 bit on ubuntu 9.10 64 bit with P2V windows 2003 SBS poor network speed

    - by RobertHC
    The configuration is Ubuntu 2.6.31-21 64 bit, VMware 2.0.2 64 bit (latest release); the hardware is a Core 2 Quad with 8GB RAM; the guest is Win 2003 Server SBS 32 bit. Dear friends, we have a physical-to-virtual converted Windows SBS 2003, converted with the latest converter available nowadays (http://www.vmware.com/products/converter/ vCenter Converter). Running the P2V 2K3 SBS on VMware Server, it boots fine, but we note abnormal CPU activity and poor LAN speed. We made the following attempts: we removed all unneeded peripherals, removed one NIC (the physical server had 2 NICs), changed the vmx to get the NIC recognized as Intel instead of AMD, removed 1 CPU (the physical server had 2 CPUs), and removed anything reported as a failed driver in the system events monitor. Nothing worked, and we got funny results. Let's look at some test results. All were made with the same file copied from different source folders. Copying from the client side (both directions, to/from the server), the result is about 10 seconds. Copying the same files from the server side (again from and to the server), the results differ: from client to server the speed is round about (a bit more than) 10 seconds, but the server-to-client direction is slower: double the time. Launching a simultaneous copy "from server to client" + "from client to server", made from the server side, results in stuck traffic... 45 seconds to do the copy. VMware Tools are installed and the e1000 driver has been updated. With one processor, CPU activity still goes up and down, but much less than with two. As a test, we installed Win 2K8 STD 64 bit and repeated all of the above tests with exactly the same file. The result is just one: always 5 seconds (this matches the LAN speed). Any idea about this issue is welcome, and thank you if any. Kind regards, R.

    Read the article

  • How to configure multiple WCF binding configurations for the same scheme

    - by Sandor Drieënhuizen
    I have a set of IIS7-hosted net.tcp WCF services that serve my ASP.NET MVC web application. The web application is accessed over the internet. WCF Services (IIS7) <--> ASP.NET MVC Application <--> Client Browser The services are username-authenticated; the account that a client (of my web application) uses to log on ends up as the current principal on the host. I want one of the services to be authenticated differently, because it serves the view model for my logon view. When it's called, the client is obviously not logged on yet. I figure Windows authentication serves best, or perhaps just certificate-based security (which in fact I should use for the authenticated services as well) if the services are hosted on a machine that is not in the same domain as the web application. That's not the point here, though. Using multiple TCP bindings is what's giving me trouble. I tried setting it up like this in my client configuration:

        <bindings>
          <netTcpBinding>
            <binding>
              <security mode="TransportWithMessageCredential">
                <message clientCredentialType="UserName"/>
              </security>
            </binding>
            <binding name="public">
              <security mode="Transport">
                <message clientCredentialType="Windows"/>
              </security>
            </binding>
          </netTcpBinding>
        </bindings>
        <client>
          <endpoint contract="Server.IService1" binding="netTcpBinding"
                    address="net.tcp://localhost:8081/Service1.svc"/>
          <endpoint contract="Server.IService2" binding="netTcpBinding"
                    address="net.tcp://localhost:8081/Service2.svc"/>
        </client>

    The server configuration is this:

        <bindings>
          <netTcpBinding>
            <binding portSharingEnabled="true">
              <security mode="TransportWithMessageCredential">
                <message clientCredentialType="UserName"/>
              </security>
            </binding>
            <binding name="public">
              <security mode="Transport">
                <message clientCredentialType="Windows"/>
              </security>
            </binding>
          </netTcpBinding>
        </bindings>
        <services>
          <service name="Service1">
            <endpoint contract="Server.IService1, Library" binding="netTcpBinding" address=""/>
          </service>
          <service name="Service2">
            <endpoint contract="Server.IService2, Library" binding="netTcpBinding" address=""/>
          </service>
        </services>
        <serviceHostingEnvironment>
          <serviceActivations>
            <add relativeAddress="Service1.svc" service="Server.Service1"/>
            <add relativeAddress="Service2.svc" service="Server.Service2"/>
          </serviceActivations>
        </serviceHostingEnvironment>

    The thing is that both bindings don't seem to want to live together in my host. When I remove either of them, all's fine, but together they produce the following exception on the client: The requested upgrade is not supported by 'net.tcp://localhost:8081/Service2.svc'. This could be due to mismatched bindings (for example security enabled on the client and not on the server). In the server trace log, I find the following exception: Protocol Type application/negotiate was sent to a service that does not support that type of upgrade. Am I looking in the right direction, or is there a better way to solve this?

    Read the article

  • WCF GZip Compression Request/Response Processing

    - by IanT8
    How do I get a WCF client to process server responses which have been GZipped or Deflated by IIS? On IIS, I've followed the instructions here on how to make IIS 6 gzip all responses (where the request contained "Accept-Encoding: gzip, deflate") emitted by .svc WCF services. On the client, I've followed the instructions here and here on how to inject this header into the web request: "Accept-Encoding: gzip, deflate". Fiddler2 shows the response is binary, not plain old XML. The client crashes with an exception which basically says there's no XML header, which of course is true. In my IClientMessageInspector, the app crashes before AfterReceiveReply is called. Some further notes: (1) I can't change the WCF service or client, as they are supplied by a 3rd party. I can, however, attach behaviors and/or message inspectors via configuration if this is the right direction to take. (2) I don't want to compress/uncompress just the soap body, but the entire message. Any ideas/solutions?

    * SOLVED *

    It was not possible to write a WCF extension to achieve these goals. Instead I followed this CodeProject article, which advocates a helper class:

        public class CompressibleHttpRequestCreator : IWebRequestCreate
        {
            public CompressibleHttpRequestCreator() { }

            WebRequest IWebRequestCreate.Create(Uri uri)
            {
                HttpWebRequest httpWebRequest =
                    Activator.CreateInstance(typeof(HttpWebRequest),
                        BindingFlags.CreateInstance | BindingFlags.Public |
                        BindingFlags.NonPublic | BindingFlags.Instance,
                        null, new object[] { uri, null }, null) as HttpWebRequest;

                if (httpWebRequest == null)
                {
                    return null;
                }

                httpWebRequest.AutomaticDecompression =
                    DecompressionMethods.GZip | DecompressionMethods.Deflate;

                return httpWebRequest;
            }
        }

    and also an addition to the application configuration file:

        <configuration>
          <system.net>
            <webRequestModules>
              <remove prefix="http:"/>
              <add prefix="http:"
                   type="Pajocomo.Net.CompressibleHttpRequestCreator, Pajocomo" />
            </webRequestModules>
          </system.net>
        </configuration>

    What seems to be happening is that WCF eventually asks some factory or other deep down in System.Net to provide an HttpWebRequest instance, and we provide the helper that will be asked to create the required instance. In the WCF client configuration file, a simple basicHttpBinding is all that is required, without the need for any custom extensions. When the application runs, the client HTTP request contains the header "Accept-Encoding: gzip, deflate", the server returns a gzipped web response, and the client transparently decompresses the HTTP response before handing it over to WCF. When I tried to apply this technique to Web Services, I found that it did NOT work. Although the helper class was executed in the same way as when used by the WCF client, the HTTP request did not contain the "Accept-Encoding: ..." header. To make this work for Web Services, I had to edit the Web Proxy class and add this method:

        protected override System.Net.WebRequest GetWebRequest(Uri uri)
        {
            System.Net.HttpWebRequest rq = (System.Net.HttpWebRequest)base.GetWebRequest(uri);
            rq.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
            return rq;
        }

    Note that it did not matter whether the CompressibleHttpRequestCreator and the webRequestModules block from the application config file were present or not. For Web Services, only overriding GetWebRequest in the Web Service proxy worked.

    Read the article

  • Configuring multiple WCF binding configurations for the same scheme doesn't work

    - by Sandor Drieënhuizen
    I have a set of IIS7-hosted net.tcp WCF services that serve my ASP.NET MVC web application. The web application is accessed over the internet. WCF Services (IIS7) <--> ASP.NET MVC Application <--> Client Browser The services are username-authenticated; the account that a client (of my web application) uses to log on ends up as the current principal on the host. I want one of the services to be authenticated differently, because it serves the view model for my logon view. When it's called, the client is obviously not logged on yet. I figure Windows authentication serves best, or perhaps just certificate-based security (which in fact I should use for the authenticated services as well) if the services are hosted on a machine that is not in the same domain as the web application. That's not the point here, though. Using multiple TCP bindings is what's giving me trouble. I tried setting it up like this in my client configuration:

        <bindings>
          <netTcpBinding>
            <binding>
              <security mode="TransportWithMessageCredential">
                <message clientCredentialType="UserName"/>
              </security>
            </binding>
            <binding name="public">
              <security mode="Transport">
                <message clientCredentialType="Windows"/>
              </security>
            </binding>
          </netTcpBinding>
        </bindings>
        <client>
          <endpoint contract="Server.IService1" binding="netTcpBinding"
                    address="net.tcp://localhost:8081/Service1.svc"/>
          <endpoint contract="Server.IService2" binding="netTcpBinding"
                    bindingConfiguration="public"
                    address="net.tcp://localhost:8081/Service2.svc"/>
        </client>

    The server configuration is this:

        <bindings>
          <netTcpBinding>
            <binding portSharingEnabled="true">
              <security mode="TransportWithMessageCredential">
                <message clientCredentialType="UserName"/>
              </security>
            </binding>
            <binding name="public">
              <security mode="Transport">
                <message clientCredentialType="Windows"/>
              </security>
            </binding>
          </netTcpBinding>
        </bindings>
        <services>
          <service name="Service1">
            <endpoint contract="Server.IService1, Library" binding="netTcpBinding" address=""/>
          </service>
          <service name="Service2">
            <endpoint contract="Server.IService2, Library" binding="netTcpBinding"
                      bindingConfiguration="public" address=""/>
          </service>
        </services>
        <serviceHostingEnvironment>
          <serviceActivations>
            <add relativeAddress="Service1.svc" service="Server.Service1"/>
            <add relativeAddress="Service2.svc" service="Server.Service2"/>
          </serviceActivations>
        </serviceHostingEnvironment>

    The thing is that both bindings don't seem to want to live together in my host. When I remove either of them, all's fine, but together they produce the following exception on the client: The requested upgrade is not supported by 'net.tcp://localhost:8081/Service2.svc'. This could be due to mismatched bindings (for example security enabled on the client and not on the server). In the server trace log, I find the following exception: Protocol Type application/negotiate was sent to a service that does not support that type of upgrade. Am I looking in the right direction, or is there a better way to solve this?

    Read the article

  • WCF MustUnderstand headers are not understood

    - by raghur
    Hello everyone, I am using a Java web service which was developed by one of our vendors and which I really do not have any control over. I have written a WCF router which the client application calls; the router sends the message to the Java web service and returns the data back to the client. The issue I am encountering is that I am successfully able to call the Java web service from the WCF router, but I am getting the following exceptions back. The router config file is as follows:

        <customBinding>
          <binding name="SimpleWSPortBinding">
            <!--<reliableSession maxPendingChannels="4" maxRetryCount="8" ordered="true" />-->
            <!--<mtomMessageEncoding messageVersion="Soap12WSAddressing10"></mtomMessageEncoding>-->
            <textMessageEncoding maxReadPoolSize="64" maxWritePoolSize="16"
                messageVersion="Soap12WSAddressing10" writeEncoding="utf-8" />
            <httpTransport manualAddressing="false" maxBufferPoolSize="524288"
                maxReceivedMessageSize="65536" allowCookies="false"
                authenticationScheme="Anonymous" bypassProxyOnLocal="true"
                keepAliveEnabled="true" maxBufferSize="65536"
                transferMode="Buffered"
                unsafeConnectionNtlmAuthentication="false"/>
          </binding>
        </customBinding>

    The test client config file:

        <customBinding>
          <binding name="DocumentRepository_Binding_Soap12">
            <!--<reliableSession maxPendingChannels="4" maxRetryCount="8" ordered="true" />-->
            <!--<mtomMessageEncoding messageVersion="Soap12WSAddressing10"></mtomMessageEncoding>-->
            <textMessageEncoding maxReadPoolSize="64" maxWritePoolSize="16"
                messageVersion="Soap12WSAddressing10" writeEncoding="utf-8">
              <readerQuotas maxDepth="32" maxStringContentLength="8192"
                  maxArrayLength="16384" maxBytesPerRead="4096"
                  maxNameTableCharCount="16384" />
            </textMessageEncoding>
            <httpTransport manualAddressing="false" maxBufferPoolSize="524288"
                maxReceivedMessageSize="65536" allowCookies="false"
                authenticationScheme="Anonymous" bypassProxyOnLocal="false"
                hostNameComparisonMode="StrongWildcard" keepAliveEnabled="true"
                maxBufferSize="65536" proxyAuthenticationScheme="Anonymous"
                realm="" transferMode="Buffered"
                unsafeConnectionNtlmAuthentication="false" useDefaultWebProxy="true" />
          </binding>
        </customBinding>

    If I use textMessageEncoding, I am getting:

        <soap:Text xml:lang="en">MustUnderstand headers: [{http://www.w3.org/2005/08/addressing}To, {http://www.w3.org/2005/08/addressing}Action] are not understood.</soap:Text>

    If I use mtomMessageEncoding, I am getting: The server did not provide a meaningful reply; this might be caused by a contract mismatch, a premature session shutdown or an internal server error. My Router class is as follows:

        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
            ConcurrencyMode = ConcurrencyMode.Multiple,
            AddressFilterMode = AddressFilterMode.Any,
            ValidateMustUnderstand = false)]
        public class EmployeeService : IEmployeeService
        {
            public System.ServiceModel.Channels.Message ProcessMessage(
                System.ServiceModel.Channels.Message requestMessage)
            {
                ChannelFactory<IEmployeeService> factory =
                    new ChannelFactory<IEmployeeService>("client");
                factory.Endpoint.Behaviors.Add(new MustUnderstandBehavior(false));
                IEmployeeService proxy = factory.CreateChannel();
                Message responseMessage = proxy.ProcessMessage(requestMessage);
                return responseMessage;
            }
        }

    The "client" in the above code under ChannelFactory is defined in the config file as:

        <client>
          <endpoint address="http://JavaWS/EmployeeService"
                    binding="wsHttpBinding" bindingConfiguration="wsHttp"
                    contract="EmployeeService.IEmployeeService" name="client"
                    behaviorConfiguration="clientBehavior">
            <headers>
            </headers>
          </endpoint>
        </client>

    I really appreciate your kind help. Thanks in advance, Raghu

    Read the article

  • Duplex contract GetCallbackChannel always returns a null instance

    - by Yaroslav
    Hi! Here is the server code:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.ServiceModel;
        using System.Runtime.Serialization;
        using System.ServiceModel.Description;

        namespace Console_Chat
        {
            [ServiceContract(SessionMode = SessionMode.Required,
                CallbackContract = typeof(IMyCallbackContract))]
            public interface IMyService
            {
                [OperationContract(IsOneWay = true)]
                void NewMessageToServer(string msg);

                [OperationContract(IsOneWay = false)]
                bool ServerIsResponsible();
            }

            [ServiceContract]
            public interface IMyCallbackContract
            {
                [OperationContract(IsOneWay = true)]
                void NewMessageToClient(string msg);

                [OperationContract(IsOneWay = true)]
                void ClientIsResponsible();
            }

            [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
            public class MyService : IMyService
            {
                public IMyCallbackContract callback = null;
                /*
                {
                    get
                    {
                        return OperationContext.Current.GetCallbackChannel<IMyCallbackContract>();
                    }
                }
                */

                public MyService()
                {
                    callback = OperationContext.Current.GetCallbackChannel<IMyCallbackContract>();
                }

                public void NewMessageToServer(string msg)
                {
                    Console.WriteLine(msg);
                }

                public void NewMessageToClient(string msg)
                {
                    callback.NewMessageToClient(msg);
                }

                public bool ServerIsResponsible()
                {
                    return true;
                }
            }

            class Server
            {
                static void Main(string[] args)
                {
                    String msg = "none";
                    ServiceMetadataBehavior behavior = new ServiceMetadataBehavior();
                    ServiceHost serviceHost = new ServiceHost(
                        typeof(MyService),
                        new Uri("http://localhost:8080/"));
                    serviceHost.Description.Behaviors.Add(behavior);
                    serviceHost.AddServiceEndpoint(
                        typeof(IMetadataExchange),
                        MetadataExchangeBindings.CreateMexHttpBinding(),
                        "mex");
                    serviceHost.AddServiceEndpoint(
                        typeof(IMyService),
                        new WSDualHttpBinding(),
                        "ServiceEndpoint");
                    serviceHost.Open();
                    Console.WriteLine("Server is up and running");

                    MyService server = new MyService();
                    server.NewMessageToClient("Hey client!");
                    /*
                    do
                    {
                        msg = Console.ReadLine();
                        // callback.NewMessageToClient(msg);
                    } while (msg != "ex");
                    */
                    Console.ReadLine();
                }
            }
        }

    Here is the client's:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.ServiceModel;
        using System.Runtime.Serialization;
        using System.ServiceModel.Description;
        using Console_Chat_Client.MyHTTPServiceReference;

        namespace Console_Chat_Client
        {
            [ServiceContract(SessionMode = SessionMode.Required,
                CallbackContract = typeof(IMyCallbackContract))]
            public interface IMyService
            {
                [OperationContract(IsOneWay = true)]
                void NewMessageToServer(string msg);

                [OperationContract(IsOneWay = false)]
                bool ServerIsResponsible();
            }

            [ServiceContract]
            public interface IMyCallbackContract
            {
                [OperationContract(IsOneWay = true)]
                void NewMessageToClient(string msg);

                [OperationContract(IsOneWay = true)]
                void ClientIsResponsible();
            }

            public class MyCallback : Console_Chat_Client.MyHTTPServiceReference.IMyServiceCallback
            {
                static InstanceContext ctx = new InstanceContext(new MyCallback());
                static MyServiceClient client = new MyServiceClient(ctx);

                public void NewMessageToClient(string msg)
                {
                    Console.WriteLine(msg);
                }

                public void ClientIsResponsible()
                {
                }

                class Client
                {
                    static void Main(string[] args)
                    {
                        String msg = "none";
                        client.NewMessageToServer(String.Format("Hello server!"));
                        do
                        {
                            msg = Console.ReadLine();
                            if (msg != "ex")
                                client.NewMessageToServer(msg);
                            else
                                client.NewMessageToServer(String.Format("Client terminated"));
                        } while (msg != "ex");
                    }
                }
            }
        }

    The line callback = OperationContext.Current.GetCallbackChannel<IMyCallbackContract>(); constantly throws a NullReferenceException. What's the problem? Thanks!

    Read the article

  • Java deadlock problem....

    - by markovuksanovic
    I am using Java sockets for communication. On the client side I have some processing, and at that point I send an object to the client. The code is as follows:

        while (true) {
            try {
                Socket server = new Socket("localhost", 3000);
                OutputStream os = server.getOutputStream();
                InputStream is = server.getInputStream();

                CommMessage commMessage = new CommMessageImpl();
                ByteArrayOutputStream bos = new ByteArrayOutputStream();
                ObjectOutputStream oos = new ObjectOutputStream(bos);
                oos.writeObject(commMessage);
                os.write(bos.toByteArray());
                os.flush();

                byte[] buff = new byte[512];
                int bytesRead = 0;
                ByteArrayOutputStream receivedObject = new ByteArrayOutputStream();
                while ((bytesRead = is.read(buff)) > -1) {
                    receivedObject.write(buff, 0, bytesRead);
                    System.out.println(receivedObject);
                }

                os.close();
                Thread.sleep(10000);
            } catch (IOException e) {
            } catch (InterruptedException e) {
            }
        }

    Next, on the server side I have the following code to read the object and write the response (which is just an echo message):

        public void startServer() {
            Socket client = null;
            try {
                server = new ServerSocket(3000);
                logger.log(Level.INFO, "Waiting for connections.");
                client = server.accept();
                logger.log(Level.INFO, "Accepted a connection from: " + client.getInetAddress());
                os = new ObjectOutputStream(client.getOutputStream());
                is = new ObjectInputStream(client.getInputStream());

                // Read contents of the stream and store it into a byte array.
                byte[] buff = new byte[512];
                int bytesRead = 0;
                ByteArrayOutputStream receivedObject = new ByteArrayOutputStream();
                while ((bytesRead = is.read(buff)) > -1) {
                    receivedObject.write(buff, 0, bytesRead);
                }

                // Check if received stream is CommMessage or not contents.
                CommMessage commMessage = getCommMessage(receivedObject);
                if (commMessage != null) {
                    commMessage.setSessionState(this.sessionManager.getState().getState());
                    ByteArrayOutputStream bos = new ByteArrayOutputStream();
                    ObjectOutputStream oos = new ObjectOutputStream(bos);
                    oos.writeObject(commMessage);
                    os.write(bos.toByteArray());
                    System.out.println(commMessage.getCommMessageType());
                } else {
                    processData(receivedObject, this.sessionManager);
                }
                os.flush();
            } catch (IOException e) {
            } finally {
                try {
                    is.close();
                    os.close();
                    client.close();
                    server.close();
                } catch (IOException e) {
                }
            }
        }

    The above code works OK if I do not try to read data on the client side (if I exclude the code related to reading). But if I have that code, for some reason I get some kind of deadlock when accessing the input streams. Any ideas what I might have done wrong? Thanks in advance.

    Read the article

  • Sending and receiving a TMemoryStream using IdTCPClient and IdTCPServer

    - by Martin Melka
    I found Remy Lebeau's chat demo of the IdTCP components in XE2 and I wanted to play with it a little bit. (It can be found here.) I would like to send a picture using these components, and the best approach seems to be using a TMemoryStream. If I send strings, the connection works fine and the strings are transmitted successfully; however, when I change it to a stream instead, it doesn't work. Here is the code:

    Server:

        procedure TMainForm.IdTCPServerExecute(AContext: TIdContext);
        var
          rcvdMsg: string;
          ms: TMemoryStream;
        begin
          // This commented code is working; it receives and sends strings.
          // rcvdMsg := AContext.Connection.IOHandler.ReadLn;
          // LogMessage('<ServerExec> ' + rcvdMsg);
          //
          // TResponseSync.SendResponse(AContext, rcvdMsg);
          try
            ms := TMemoryStream.Create;
            AContext.Connection.IOHandler.ReadStream(ms);
            ms.SaveToFile('c:\networked.bmp');
          except
            LogMessage('Failed to receive', clRed);
          end;
        end;

    Client:

        procedure TfrmMain.Button1Click(Sender: TObject);
        var
          ms: TMemoryStream;
          bmp: TBitmap;
          pic: TPicture;
          s: string;
        begin
          // Again, this code is working for sending strings.
          // s := edMsg.Text;
          // Client.IOHandler.WriteLn(s);
          ms := TMemoryStream.Create;
          pic := TPicture.Create;
          pic.LoadFromFile('c:\Back.png');
          bmp := TBitmap.Create;
          bmp.Width := pic.Width;
          bmp.Height := pic.Height;
          bmp.Canvas.Draw(0, 0, pic.Graphic);
          bmp.SaveToStream(ms);
          ms.Position := 0;
          Client.IOHandler.Write(ms);
          ms.Free;
        end;

    When I try to send the stream from the client, nothing observable happens (a breakpoint in OnExecute doesn't fire). However, when closing the programs (after sending the MemoryStream), two things happen: If the client is closed first, only then does the except part get processed (the log displays the 'Failed to receive' error. However, even if I place a breakpoint on the first line of the try-except block, it somehow gets skipped and only the error is displayed). If the server is closed first, the IDE doesn't change back from debug, the client doesn't change its state to disconnected (as it normally does when the server disconnects), and after the client is closed as well, an Access Violation error from the server app appears. I guess this means there is a thread of the server still running and maintaining the connection. But no matter how much time I give it, it never completes the task of receiving the MemoryStream. Note: The server uses IdSchedulerOfThreadDefault and IdAntiFreeze, if that matters. As I can't find any reliable source of help for the revamped Indy 10 (it all appears to apply to the older Indy 10, or even Indy 9), I hope you can tell me what is wrong. Thanks

    - ANSWER -

    Server:

        procedure TMainForm.IdTCPServerExecute(AContext: TIdContext);
        var
          size: Integer;
          ms: TMemoryStream;
        begin
          try
            ms := TMemoryStream.Create;
            size := AContext.Connection.IOHandler.ReadLongInt;
            AContext.Connection.IOHandler.ReadStream(ms, size);
            ms.SaveToFile('c:\networked.bmp');
          except
            LogMessage('Failed to receive', clRed);
          end;
        end;

    Client:

        procedure TfrmMain.Button1Click(Sender: TObject);
        var
          ms: TMemoryStream;
          bmp: TBitmap;
          pic: TPicture;
        begin
          ms := TMemoryStream.Create;
          pic := TPicture.Create;
          pic.LoadFromFile('c:\Back.png');
          bmp := TBitmap.Create;
          bmp.Width := pic.Width;
          bmp.Height := pic.Height;
          bmp.Canvas.Draw(0, 0, pic.Graphic);
          bmp.SaveToStream(ms);
          ms.Position := 0;
          Client.IOHandler.Write(ms, 0, True);
          ms.Free;
        end;

    Read the article
