Search Results

Search found 30471 results on 1219 pages for 'client side scripting'.

Page 260/1219

  • Multiplayer Network Game - Interpolation and Frame Rate

    - by J.C.
    Consider the following scenario: Let's say, for sake of example and simplicity, that you have an authoritative game server that sends state to its clients every 45ms. The clients are interpolating state with an interpolation delay of 100 ms. Finally, the clients are rendering a new frame every 15ms. When state is updated on the client, the client time is set from the incoming state update. Each time a frame renders, we take the render time (client time - interpolation delay) and identify a previous and target state to interpolate from. To calculate the interpolation amount/factor, we take the difference of the render time and previous state time and divide by the difference of the target state and previous state times: var factor = ((renderTime - previousStateTime) / (targetStateTime - previousStateTime)) Problem: In the example above, we are effectively displaying the same interpolated state for 3 frames before we collected the next server update and a new client (render) time is set. The rendering is mostly smooth, but there is a dash of jaggedness to it. Question: Given the example above, I'd like to think that the interpolation amount/factor should increase with each frame render to smooth out the movement. Should this be considered and, if so, what is the best way to achieve this given the information from above?
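
    For illustration, here is a minimal C# sketch of the sampling step described above. The snapshot fields, class names and clock handling are assumptions for the sketch, not details from the post; it simply applies the same factor formula, clamped to [0, 1].

        using System;

        // Minimal sketch of the interpolation step described in the question.
        class StateSnapshot
        {
            public double Time;   // server timestamp of this state
            public float X;       // example quantity to interpolate
        }

        class Interpolator
        {
            const double InterpolationDelay = 0.100;   // 100 ms, as in the example

            // Same factor formula as in the question, clamped to [0, 1].
            public static double Factor(StateSnapshot previous, StateSnapshot target, double clientTime)
            {
                double renderTime = clientTime - InterpolationDelay;
                double factor = (renderTime - previous.Time) / (target.Time - previous.Time);
                return Math.Max(0.0, Math.Min(1.0, factor));
            }

            // Per-frame sample of one quantity using that factor.
            public static float SampleX(StateSnapshot previous, StateSnapshot target, double clientTime)
            {
                double f = Factor(previous, target, clientTime);
                return (float)(previous.X + (target.X - previous.X) * f);
            }
        }

    If the clientTime passed in advances by the frame delta (15 ms in the example) on every render, rather than only when a server update arrives, the factor grows on each of the three frames between updates, which is the behaviour the question suggests.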

    Read the article

  • OpenLDAP with StartTLS broken on Debian Lenny

    - by mr.zog
    I'm trying to get OpenLDAP on Lenny to work with StartTLS. I have a Fedora 13 machine which I'm using as a client for testing. So far the Fedora client is ignoring the 'host' directive in /etc/ldap.conf when I try to connect using ldapsearch. The client wants to connect to 127.0.0.1:389 even if I specify -H ldaps://server.name when using ldapsearch. /etc/ldap.conf on the client machine is in mode 444. But even when I try connecting locally from an ssh session, I see errors like this:

        ldap_sasl_interactive_bind_s: Can't contact LDAP server (-1)

    Someone hit me with a cluebat, plz. Update: you must use ~/.ldaprc for settings such as 'host'.

    Read the article

  • Firewalling a Cisco ASA Split tunnel

    - by dunxd
    I have a Cisco ASA 5510 at head office, and Cisco ASA 5505 in remote offices. The remote offices are connected over a split tunnelled VPN - the ASA 5505s use "Easy VPN" Client type VPN in Network Extension Mode (NEM). I'd like to set firewall rules for the non-tunnelled traffic only. Traffic over the VPN to head office should not have any firewall rules applied. I might want to apply different firewall rules to different remote offices. All the documentation I have been able to find assumes the Client VPN is a software endpoint, and all the configuration is done at the 5510. When using a Cisco 5505 as the VPN client, is it possible to configure any firewalling at the Client end, or does it all have to come from the 5510? Are there any other issues to look out for when split-tunnelling a VPN by this method?

    Read the article

  • Using Remote Desktop, connect to a Windows 7 domain user account without first logging on locally?

    - by calavera
    I have a Dell laptop (henceforth we'll call this the server) running Windows 7 Enterprise. The server is part of my company's domain. My primary user account is a domain account. When I am at home and not connected to the domain, I prefer to connect to the server using Remote Desktop Connection from my MacBook Pro (we'll call this the client). The problem is that if I do not physically log in to the server, I am unable to connect to it using RDC from the client. I have a local administrator account on the server, and connecting to it via RDC works just fine. I had a feeling that the Mac RDC application was not giving me the full story, so I attempted the same procedure from a Windows 7 client. When trying to log in, I get this message: So basically, if I log on to the server physically with my domain user and lock the computer, I can then successfully log on from the client. Otherwise, I am unable to connect.

    Read the article

  • How to run a specific program with root privileges (Ubuntu OS) when no sudo user logs into the system?

    - by makulia
    How can I run a specific program with root privileges (Ubuntu OS) when no sudo user logs into the system? The program needs root privileges to function correctly, and a normal user shouldn't be able to shut the process down. For example, I have two users: Admin and Client. The program should start only when Client logs into the system. It needs root privileges, and Client shouldn't be able to shut this process down.

    Read the article

  • stunnel crashing

    - by Jay
    I'm trying to use stunnel to secure a legacy application's communications. I can't seem to get it set up and working. Can anyone provide any hints where I'm going wrong? Here's what I'm trying to accomplish: a Windows service on a client machine connects to a server on port 7000 using TCP. I'd like to encrypt the communication between client and server. Here's what I've tried: Created a new server that accepts SSL connections on port 7443. Got a certificate for the server and installed it. That seems to work with my test setup. Installed stunnel on my Windows machine (version 7.43 from the distribution archive file). Installed libssl32.dll and libeay32.dll in the same directory as stunnel.exe (from the openssl-0.9.8h-1 binary distribution). Installed it as a service using "stunnel -install". Configured stunnel as follows:

        debug=7
        output=C:\p4\internal\Utility\Proxy\proxy.log
        service=Proxy
        taskbar=no
        [exchange]
        accept=7000
        client=yes
        connect=proxy.blah.com:7443

    I changed my hosts file to trick the old application into connecting through stunnel:

        server.blah.com 127.0.0.1                       # when client looks up server it goes to stunnel
        proxy.blah.com IP-address-of-server.blah.com    # stunnel connects to new server

    "server.blah.com" now resolves to the machine it's running on (i.e. stunnel). "proxy.blah.com" goes to the real server. stunnel should connect to the server. I start the stunnel service and try to connect. It looks like it's working but the stunnel service just shuts down with no message.

        2010.04.19 13:16:21 LOG5[4924:3716]: stunnel 4.33 on x86-pc-mingw32-gnu with OpenSSL 0.9.8h 28 May 2008
        2010.04.19 13:16:21 LOG5[4924:3716]: Threading:WIN32 SSL:ENGINE Sockets:SELECT,IPv6
        2010.04.19 13:16:49 LOG5[4924:3748]: Service exchange accepted connection from 127.0.0.1:4134
        2010.04.19 13:16:49 LOG6[4924:3748]: connect_blocking: connecting x.80.60.32:7443
        2010.04.19 13:16:49 LOG5[4924:3748]: connect_blocking: connected x.80.60.32:7443
        2010.04.19 13:16:49 LOG5[4924:3748]: Service exchange connected remote server from x.253.120.19:4135
        2010.04.19 13:20:24 LOG5[3668:3856]: Reading configuration from file stunnel.conf
        2010.04.19 13:20:24 LOG7[3668:3856]: Snagged 64 random bytes from C:/.rnd
        2010.04.19 13:20:24 LOG7[3668:3856]: Wrote 1024 new random bytes to C:/.rnd
        2010.04.19 13:20:24 LOG7[3668:3856]: RAND_status claims sufficient entropy for the PRNG
        2010.04.19 13:20:24 LOG7[3668:3856]: PRNG seeded successfully
        2010.04.19 13:20:24 LOG7[3668:3856]: SSL context initialized for service exchange
        2010.04.19 13:20:24 LOG5[3668:3856]: Configuration successful
        2010.04.19 13:20:24 LOG5[3668:3856]: No limit detected for the number of clients
        2010.04.19 13:20:24 LOG7[3668:3856]: FD=312 in non-blocking mode
        2010.04.19 13:20:24 LOG7[3668:3856]: Option SO_REUSEADDR set on accept socket
        2010.04.19 13:20:24 LOG7[3668:3856]: Service exchange bound to 0.0.0.0:7000
        2010.04.19 13:20:24 LOG7[3668:3856]: Service exchange opened FD=312
        2010.04.19 13:20:24 LOG5[3668:3856]: stunnel 4.33 on x86-pc-mingw32-gnu with OpenSSL 0.9.8h 28 May 2008
        2010.04.19 13:20:24 LOG5[3668:3856]: Threading:WIN32 SSL:ENGINE Sockets:SELECT,IPv6
        2010.04.19 13:21:02 LOG7[3668:4556]: Service exchange accepted FD=372 from 127.0.0.1:4156
        2010.04.19 13:21:02 LOG7[3668:4556]: Creating a new thread
        2010.04.19 13:21:02 LOG7[3668:4556]: New thread created
        2010.04.19 13:21:02 LOG7[3668:3756]: Service exchange started
        2010.04.19 13:21:02 LOG7[3668:3756]: FD=372 in non-blocking mode
        2010.04.19 13:21:02 LOG5[3668:3756]: Service exchange accepted connection from 127.0.0.1:4156
        2010.04.19 13:21:02 LOG7[3668:3756]: FD=396 in non-blocking mode
        2010.04.19 13:21:02 LOG6[3668:3756]: connect_blocking: connecting x.80.60.32:7443
        2010.04.19 13:21:02 LOG7[3668:3756]: connect_blocking: s_poll_wait x.80.60.32:7443: waiting 10 seconds
        2010.04.19 13:21:02 LOG5[3668:3756]: connect_blocking: connected x.80.60.32:7443
        2010.04.19 13:21:02 LOG5[3668:3756]: Service exchange connected remote server from x.253.120.19:4157
        2010.04.19 13:21:02 LOG7[3668:3756]: Remote FD=396 initialized
        2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): before/connect initialization
        2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 write client hello A
        2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 read server hello A
        2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 read server certificate A
        2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 read server done A
        2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 write client key exchange A
        2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 write change cipher spec A
        2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 write finished A
        2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 flush data
        2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 read finished A

    The client thinks the connection is closed:

        No connection could be made because the target machine actively refused it 127.0.0.1:7000
           at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress)
           at System.Net.Sockets.Socket.Connect(EndPoint remoteEP)
           at Service.ConnUtility.Connect()

    Any suggestions?

    Read the article

  • ClearOS transparent firewall and web proxy mode possible?

    - by Scott Szretter
    In other firewall software packages I have seen the ability, and am hoping it is possible with ClearOS, to put the device into transparent or bridged mode, where it sits physically between the client workstations and the internet router, but the client workstations have the internet router's address as their gateway, not the ClearOS machine. The client workstations would also not have to enter an HTTP proxy address, as that would be transparent. Is this possible, and how do you configure ClearOS this way? Thanks!

    Read the article

  • "Directory index forbidden by Options directive" when deleting or renaming folders through webdav

    - by sandwiches
    I am trying to delete folders through WebDAV but all I get is a 403 on the client and "Directory index forbidden by Options directive" in the Apache error log. I enabled "Options Indexes" for the folder and I stopped getting the errors in either the client or the log, but I still can't rename or delete folders through WebDAV. Any ideas why I'm unable to edit folders through WebDAV? I am running WAMP, a default installation with Apache 2.2.17. I can connect, create files, delete files, rename them, etc. I can create folders but not delete or rename them once they're created. In the access log, whenever I try to delete, I get this:

        "DELETE /uploads/shahs HTTP/1.1" 301 243

    In the error log, I get:

        Directory index forbidden by Options directive:

    The WebDAV client gives a 403 when trying to delete or rename folders. Once I added "Options Indexes", I stopped getting the error message in the Apache error log and the 403 on the WebDAV client, but now deleting or renaming does nothing. No error messages, but nothing happens at all.

    Read the article

  • Pragmas and exceptions

    - by Darryl Gove
    The compiler pragmas:

        #pragma no_side_effect(routinename)
        #pragma does_not_write_global_data(routinename)
        #pragma does_not_read_global_data(routinename)

    are used to tell the compiler more about the routine being called, and enable it to do a better job of optimising around the routine. If a routine does not read global data, then global data does not need to be stored to memory before the call to the routine. If the routine does not write global data, then global data does not need to be reloaded after the call. The no_side_effect directive indicates that the routine does no I/O, does not read or write global data, and that the result depends only on the input. However, these pragmas should not be used on routines that throw exceptions. The following example illustrates the problem:

        #include <iostream>

        extern "C"
        {
          int exceptional(int);
          #pragma no_side_effect(exceptional)
        }

        int exceptional(int a)
        {
          if (a==7) { throw 7; }
          else { return a+1; }
        }

        int a;
        int c=0;

        class myclass
        {
          public:
          int routine();
        };

        int myclass::routine()
        {
          for(a=0; a<1000; a++)
          {
            c=exceptional(c);
          }
          return 0;
        }

        int main()
        {
          myclass f;
          try
          {
            f.routine();
          }
          catch(...)
          {
            std::cout << "Something happened" << a << c << std::endl;
          }
        }

    The routine "exceptional" is declared as having no side effects; however, it can throw an exception. The no-side-effects directive enables the compiler to avoid storing global data back to memory, and retrieving it after the function call, so the loop containing the call to exceptional is quite tight:

        $ CC -O -S test.cpp
        ...
        .L77000061:
        /* 0x0014  38 */  call   exceptional ! params = %o0 ! Result = %o0
        /* 0x0018  36 */  add    %i1,1,%i1
        /* 0x001c     */  cmp    %i1,999
        /* 0x0020     */  ble,pt %icc,.L77000061
        /* 0x0024     */  nop

    However, when the program is run the result is incorrect:

        $ CC -O t.cpp
        $ ./a.out
        Something happened00

    If the code had worked correctly, the output would have been "Something happened77" - the exception occurs on the seventh iteration. Yet the current code produces a message that uses the original values of the variables 'a' and 'c'. The problem is that the exception handler reads global data, and due to the no-side-effects directive the compiler has not updated the global data before the function call. So these pragmas should not be used on routines that have the potential to throw exceptions.

    Read the article

  • "Brute force attempt" on sending multiple emails

    - by bretddog
    While testing sending multiple emails, I successfully sent about 100 emails (with a 20KB PDF attachment) to the same email address (my own), and they were all received. But on the next attempt, my cPanel account was blocked due to a "brute force attempt". Are there any special precautions I need to take when sending bulk emails? I simply looped through the code below without pause for each email. What type of alert could that give on the email server, and how should I avoid it?

        client = New SmtpClient(smtp, Convert.ToInt32(port))
        AddHandler client.SendCompleted, AddressOf OnAsyncSendComplete
        client.Credentials = New System.Net.NetworkCredential(usn, psw)
        client.SendAsync(mail, token)

    Should I wait for the SendCompleted event for each email before sending the next?
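
    For reference, a minimal C# sketch of the serialized approach the question asks about: waiting for SendCompleted before the next send and pausing between messages. The host, credentials and delay value are placeholders, not recommendations from the post.

        using System;
        using System.Net;
        using System.Net.Mail;
        using System.Threading;

        class ThrottledSender
        {
            static readonly ManualResetEvent sendDone = new ManualResetEvent(false);

            static void Main()
            {
                var client = new SmtpClient("smtp.example.com", 587);
                client.Credentials = new NetworkCredential("user", "password");
                client.SendCompleted += (s, e) => sendDone.Set();

                for (int i = 0; i < 100; i++)
                {
                    var mail = new MailMessage("me@example.com", "me@example.com",
                                               "Test " + i, "Body " + i);

                    sendDone.Reset();
                    client.SendAsync(mail, null);
                    // SmtpClient allows only one SendAsync at a time, so waiting here
                    // also avoids an InvalidOperationException on the next iteration.
                    sendDone.WaitOne();

                    Thread.Sleep(500);   // arbitrary pause between messages
                }
            }
        }

    Whether a pause is needed at all, and how long it should be, depends on the mail server's rate limits, so the 500 ms here is only an example value.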

    Read the article

  • How to attach files to an email in Windows Phone 7.5 Mango

    - by Vaibhav Garg
    In the default email client in Windows Phone 7.5 Mango, how can arbitrary files (.zip, .mp3, .txt, .pdf, etc.) be attached? As the storage is sandboxed, a file handler can implement hooks into the email client, as MS Office does and Adobe Reader doesn't, but the email client cannot access files in the phone's storage. Is there a way, or a workaround? In my usage pattern, I tend to send a lot of PDFs, and am unable to do that!

    Read the article

  • Is there a better way to consume an ASP.NET Web API call in an MVC controller?

    - by davidisawesome
    In a new project for my work I am creating a fairly large ASP.NET Web API. The API will be in a separate Visual Studio solution that also contains all of my business logic and database interactions, as well as the Model classes. In the test application I am creating (which is ASP.NET MVC 4), I want to be able to hit an API URL I defined from the controller and cast the returned JSON to a Model class. The reason behind this is that I want to take advantage of strongly typing my views to a Model. This is all still at a proof-of-concept stage, so I have not done any performance testing on it, but I am curious whether what I am doing is a good practice, or if I am crazy for even going down this route. Here is the code on the client controller:

        public class HomeController : Controller
        {
            protected string dashboardUrlBase = "http://localhost/webapi/api/StudentDashboard/";

            public ActionResult Index() //This view is strongly typed against User
            {
                //testing against Joe Bob
                string adSAMName = "jBob";
                WebClient client = new WebClient();
                string url = dashboardUrlBase + "GetUserRecord?userName=" + adSAMName;
                //'User' is a Model class that I have defined.
                User result = JsonConvert.DeserializeObject<User>(client.DownloadString(url));
                return View(result);
            }
            . . .
        }

    If I choose to go this route, another thing to note is that I am loading several partial views in this page (as I will also do in subsequent pages). The partial views are loaded via an $.ajax call that hits this controller and does basically the same thing as the code above: instantiate a new WebClient, define the URL to hit, deserialize the result and cast it to a Model class. So it is possible (and likely) that I could be performing the same actions 4-5 times for a single page. Is there a better method to do this that will let me keep strongly typed views and do my work on the server rather than on the client (this is just a preference, since I can write C# faster than I can write JavaScript)?
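
    As a sketch of one way to avoid repeating that WebClient/JsonConvert boilerplate 4-5 times per page, the pattern from the question can be wrapped once in a small typed helper. The ApiClient name and base address below are illustrative only; the calls themselves (WebClient.DownloadString and JsonConvert.DeserializeObject<T>) are the same ones the question already uses.

        using System;
        using System.Net;
        using Newtonsoft.Json;

        // Hypothetical helper that centralizes "download string, deserialize JSON into a model".
        public class ApiClient
        {
            private readonly string _baseUrl;

            public ApiClient(string baseUrl)
            {
                _baseUrl = baseUrl;
            }

            // Fetches the given relative URL and deserializes the JSON response into T.
            public T Get<T>(string relativeUrl)
            {
                using (var client = new WebClient())
                {
                    string json = client.DownloadString(_baseUrl + relativeUrl);
                    return JsonConvert.DeserializeObject<T>(json);
                }
            }
        }

        // Usage inside the controller action from the question (sketch):
        //   var api = new ApiClient("http://localhost/webapi/api/StudentDashboard/");
        //   User result = api.Get<User>("GetUserRecord?userName=" + adSAMName);
        //   return View(result);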

    Read the article

  • VPN messes up DNS resolution

    - by user124114
    After connecting with the Kerio VPN client (OS X Leopard) to a server, the internet (~web browsing) stopped working for the client. After poking around, the issue seems to be a bad DNS server (i.e., entering IPs directly works). After disconnecting from the VPN, the invalid DNS server disappears from scutil --dns and all's well again. Now, I don't understand why OS X on the client even changes the DNS settings -- internet should be routed through a different interface, through the default gateway, not through the VPN. Questions: By what mechanism does connecting the VPN client change the "default" DNS server? How can I stop the VPN client from changing routing/DNS rules? Where is this stuff stored/modified?

    Before VPN:

        $ scutil --dns
        DNS configuration
        resolver #1
          nameserver[0] : 10.66.77.1   # <---- default gateway = home router; all good
          order : 200000
        resolver #2
          domain : local
          options : mdns
          timeout : 2
          order : 300000
        ...

    VPN connected:

        $ scutil --dns
        DNS configuration
        resolver #1
          nameserver[0] : 192.168.1.1  # <--- rubbish
          nameserver[1] : 192.168.2.1
          order : 200000
        resolver #2
          domain : local
          options : mdns
          timeout : 2
          order : 300000
        ...

    The VPN doesn't appear among $ networksetup -listallnetworkservices.

    Read the article

  • kill sessions for other machines

    - by LipKee
    I have an admin site and a client site. Multiple users will view the client site at the same time. Is it possible to force the users to log out of my client site from the admin site? I'm currently using classic ASP with In-Proc sessions. Is there a way I can kill all the users' sessions and force them to log out?

    Read the article

  • lots of dns requests from China, should I worry?

    - by nn4l
    I have turned on DNS query logs, and when running "tail -f /var/log/syslog" I see that I get hundreds of identical requests from a single IP address:

        Apr 7 12:36:13 server17 named[26294]: client 121.12.173.191#10856: query: mydomain.de IN ANY +
        Apr 7 12:36:13 server17 named[26294]: client 121.12.173.191#44334: query: mydomain.de IN ANY +
        Apr 7 12:36:13 server17 named[26294]: client 121.12.173.191#15268: query: mydomain.de IN ANY +
        Apr 7 12:36:13 server17 named[26294]: client 121.12.173.191#59597: query: mydomain.de IN ANY +

    The frequency is about 5-10 requests per second, going on for about a minute. After that the same effect repeats from a different IP address. I have now logged about 10000 requests from about 25 IP addresses within just a couple of hours; all of them come from China according to "whois [ipaddr]". What is going on here? Is my name server under attack? Can I do something about this?

    Read the article

  • Methodology To Determine Cause Of User Specific Error

    - by user3163629
    We have software that, for certain clients, fails to download a file. The software is developed in Python and compiled into a Windows executable. The cause of the error is still unknown, but we have established that the client has an active internet connection. We suspect that the cause is the client's network setup. This error cannot be replicated in house. What technique or methodology should be applied to this kind of client-specific error that cannot be replicated in house? The end goal is to determine the cause of this error so we can move on to the solution. For example: Remote debugging: produce a debug version of the software and ask the client to send back a debug output file. This involves a lot of time (back-and-forth communication) and requires the client to work and act in a timely manner to be successful. In-house debugging: visit the client and determine their network setup, etc. Possibly develop a series of script tests beforehand to run on the client's computer on the same network. Are there other methodologies and techniques I am not aware of?

    Read the article

  • clients' auto-lock feature for inactivity timeout not working

    - by Swaminathan Shanmugam
    In our SBS 2003 domain environment, client PCs that were inactive for the default period were locked automatically, and only the Ctrl+Alt+Del and client password combination would unlock them. About 9 months ago all our client PCs joined the new SBS 2011 domain, but the auto-lock feature has not worked since the beginning (users usually lock their machines manually with the Win+L key combination). Clients have only now brought this issue to me. Please help me set that option!

    Read the article

  • Customizing ASP.NET MVC 2 - Metadata and Validation

    In this article, you will learn about two major extensibility points of ASP.NET MVC 2: the ModelMetadataProvider and the ModelValidatorProvider. These two APIs control how templates are rendered, as well as server-side and client-side validation of your model objects.

    Read the article

  • Defensive Error Handling

    TRY…CATCH error handling in SQL Server has certain limitations and inconsistencies that will trap the unwary developer, used to the more feature-rich error handling of client-side languages such as C# and Java. In this article, abstracted from his excellent new book, Defensive Database Programming with SQL Server, Alex Kuznetsov offers a simple, robust approach to checking and handling errors in SQL Server, with client-side error handling used to enforce what is done on the server.

    Read the article

  • Block SMTP session with sender domain which doesn't itself accept SMTP connection.

    - by bignose
    I'm administrating a mail service for a small business. Their mail host's internet connection is an ADSL service with a permanent IP address. Unfortunately, many misconfigured mail systems will happily deliver to this host, but, when the host attempts to send mail back (e.g. a bounce notice, or a normal response from someone), the declared sender's domain has an MX which refuses to receive connections from this host. That misconfiguration makes their system a one-way mail sender, which is a problem. How can I configure Postfix on this customer's mail host to refuse SMTP sessions that declare a sender domain which itself refuses SMTP from this host? That is, if the SMTP client declares a domain that we can't make SMTP connections back to, then there's not much point accepting the incoming connection in the first place. Note that I'm not, as some commenters have assumed, talking about checking whether the SMTP client will receive messages. The check I want is whether the declared sender's domain (regardless of who the current SMTP client is) will accept SMTP connections from here. In other words: when we get around to sending a message back, we'll need the sender's domain to accept SMTP connections; I want to do that check before accepting the incoming session. I'm imagining a late check (after the low-cost checks to winnow most of the rubbish connections) that keeps the client on the other end while it attempts an SMTP client connection back to the declared domain of the sender. If that connection is rejected, the incoming one is also rejected. I'm also open to other suggestions for how this problem might be addressed (short of not using this mail host at all, which isn't an option).
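
    Purely as an illustration of the callback check described above (not a Postfix feature or configuration), a sketch of the "can we connect back to the declared sender's domain?" test might look like this in C#. A real check would resolve the domain's MX records and speak enough SMTP to read the banner; this sketch only attempts the TCP connection, and all names in it are assumptions.

        using System;
        using System.Net.Sockets;

        class SenderDomainCheck
        {
            // Returns true if an SMTP (port 25) connection to the given domain succeeds
            // within the timeout. NOTE: this connects to the domain's A record; an MX
            // lookup is omitted for brevity and would be required in practice.
            public static bool DomainAcceptsSmtp(string senderDomain, int timeoutMs)
            {
                try
                {
                    using (var client = new TcpClient())
                    {
                        var connectTask = client.ConnectAsync(senderDomain, 25);
                        return connectTask.Wait(timeoutMs) && client.Connected;
                    }
                }
                catch (Exception)
                {
                    // Refused, timed out, or unresolvable: treat as "won't accept SMTP from us".
                    return false;
                }
            }
        }

    Wiring such a check into Postfix would have to go through a policy service or an external check; the sketch above only shows the probe itself.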

    Read the article

  • How to organize deployment process in Chef-controlled environment?

    - by Alex
    I have a web Linux-based infrastructure which consists of 15 virtual machines and over 50 various services. It is fully controlled by Chef. Most of the services are developed internally. Basically, the current deployment process is triggered by a shell script. A build system (a mix of Python and shell scripts) packages the services as .deb files and puts these packages into a repo. It then runs apt-get update on all 15 nodes, because the standard Chef apt cookbook only runs apt-get once per day and we definitely do not want to run apt-get update unconditionally on each chef-client wake. Finally, the build system restarts the chef-client daemons on all 15 nodes (we need this step because of Chef's pull nature). The current process has a number of drawbacks we want to address. First off, it is asynchronous, because the deployment script does not check the chef-client logs after the restart, so we don't even know if the deployment was successful; it does not even wait for the Chef clients to complete the cycle. Second, we definitely do not want to force chef-client restarts on all nodes because we usually deploy only a small number of packages. And third, I am not quite sure using chef-client for deployment is legitimate; probably we are just doing it wrong from the start. Please share your thoughts/experience.

    Read the article

  • Postgres pgpass windows - not working

    - by Scott
    DB: Postgres 9.0. Client: Windows 7. Server: Windows 2008, 64-bit. I'm trying to connect remotely to a Postgres instance for purposes of performing a pg_dump to my local machine. Everything works from my client machine, except that I need to provide a password at the password prompt, and I'd ultimately like to batch this with a script. I've followed the instructions here: http://www.postgresql.org/docs/current/static/libpq-pgpass.html but it's not working. To recap, I've created a file on the client (and tried the server as well): C:/Users/postgres/AppData/postgresql/pgpass.conf, where postgres is the db user. The file has one line with the following data: *:5432:*postgres:[mypassword] (I also tried explicit IP/dbname values, all asterisks, and every combination in between; I've also tried replacing each '*' with [localhost|myip] and [mydatabasename] respectively.) From my client machine, I connect using: pg_dump -h [myip] -U postgres -w [mydbname] [mylocaldumpfile] I'm presuming that I need to provide the '-w' switch in order to ignore the password prompt, at which point it should look in the AppData directory on the server. It just comes back with "connection to database failed: fe_sendauth: no password supplied". Any insights are appreciated. As a hack workaround, if there was a way I could tell the Windows batch file on my client machine to inject the password at the postgres prompt, that would work as well. Thanks.
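
    For reference, the pgpass format documented at the page linked above uses five colon-separated fields per line -- hostname:port:database:username:password -- and on Windows the file is normally read from %APPDATA%\postgresql\pgpass.conf. A minimal sketch of an entry matching the scenario above (the password is a placeholder):

        # %APPDATA%\postgresql\pgpass.conf
        # hostname:port:database:username:password
        *:5432:*:postgres:mypassword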

    Read the article

  • Making Multilingual J! 1.5 + Joomfish + VM 1.17 more workable

    - by rhand
    I have been working on a multilingual Joomla! 1.5.23 e-commerce website for a client for quite a while and have made several customizations. But the client is still not happy that he has to adjust content in at least three locations: Joomfish, Virtuemart, and the Article Manager. Joomfish is nice in that it allows you to create multilingual content and copy and paste the source language on the same page, which makes translation work easier, but it is annoying in that you have to edit several custom fields at different locations/content types. As Joomla! source-language content still needs to be created in the Article Manager first, this is the second location the client has to work in. The third location is Virtuemart. Here all the products and product categories are created, and here we added some custom fields as well. Now I was considering upgrading the website to Joomla 1.7, or later on to 1.8. These J! versions have better multilingual support. But I wonder if we can really make the client's life easier. We will still have to copy the source language to a new article and create content in another language. We will still have the issue of content in custom fields that needs to be translated, and we will still have to create content. Should I go for another CMS such as Magento, or do you think there is a way in a more recent Joomla! version to work with all content in one or at most two locations?

    Read the article

  • Synergy 1.3.6 on Windows 7 Test not working

    - by Joel Avery
    So I have the client running, and it says it is perfectly connected to the server when I run a test. I run the same test on the server and it just says "running test. press stop to end test" and no output log file comes up like on the client. Then when I stop the test it says Synergy is about to quit with errors and warnings, and to please check the log and click OK. Except the log didn't come up and I don't know how to access the log from the command prompt. And I'm pretty sure my configuration is set up correctly:

        section: links
            Joel-HP:
                left = JoelPC2
            JoelPC2:
                right = Joel-HP
        end

    My Windows 7 64-bit server is on the right, and the Windows 7 32-bit client is on the left. Both client and server are run as admin.

    Read the article

  • Which architecture should I choose for this project?

    - by Jichao
    I have a project. The server, which is based on a phone PCI board, is responsible for receiving phone calls from customers and then redirecting the calls to operators. I have decided to code the server using the C++ programming language and the Qt framework, because the PCI board SDK's interface is C/C++ based, and for the sake of portability. The server needs to send the customer's information to the operator while ringing the operator, and the UI of the operator client should be browser-based. Now the key problem is how the server could notify the operator that there is a phone call for him or her. One architecture I have considered is this: the operator's browser client uses Ajax to poll the web server to check whether there is a call for the client; the web server polls the database server to check whether there is a call; the desktop server (C++) waits for phone calls and sets the information in the database. The other operations, such as hanging up the call from the client or transferring the call to another operator, also use this architecture. Then, is there any way other than polling the server (JS code setInterval('getDail', 1000)) to decide whether there is a call for the operator? Is this architecture feasible, or should I use some techniques that I do not know well, such as web services, XML-RPC, or SOAP?

    Read the article
