Search Results

Search found 7669 results on 307 pages for 'dealing with clients'.


  • Different EF Property DataType than Storage Layer Possible?

    - by dj_kyron
    Hi, I am putting together a WCF Data Service for PatientEntities using Entity Framework. My solution needs to address these requirements: The DateOfBirth property of the Patient entity is stored in SQL Server as a string. It would be ideal if the entity class did not also use the "string" type but rather a DateTime type (I would expect this to be possible since we're abstracting away from the storage layer). Where could a conversion mechanism be put in place that would convert to and from DateTime/string so that the entity and SQL Server stay in sync? I cannot change the storage layer's structure, so I have to work around it. WCF Data Services (read-only, so no need for saving changes) need to be used, since clients will be able to use LINQ expressions to consume the service. They can generate results based on any query scenario they need and are not constrained by a single method such as GetPatient(int ID). I've tried to use DTOs, but I run into the problem of mapping the ObjectContext to a DTO; I don't think that is theoretically possible... or it is too complicated if it is. I've tried to use Self-Tracking Entities, but if I'm correct they require the metadata from the .edmx file, and that doesn't allow a different property data type. I also want to add customizations to my entity getter methods, so that a property "MRN" of type "string" has .Replace("MR~", string.Empty) performed before it is returned. I can add this to the getter methods, but the problem is that Entity Framework will overwrite it the next time it regenerates the entity classes. Is there a permanent place I can put these customizations? Should I use POCO instead? How would that work with WCF Data Services? Where would the service grab the metadata?
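    For the regeneration concern in particular, one commonly used escape hatch is that the classes EF generates are partial, so members added in a separate file survive regeneration. A minimal sketch (the property names and the stored date format are assumptions, and exposing computed members through the Data Service would still need its own configuration):

    ```csharp
    using System;
    using System.Globalization;

    // Hypothetical second half of the generated Patient class, kept in its own
    // file so the EF designer never touches it.
    public partial class Patient
    {
        // Assumes the stored DateOfBirth string uses the format "yyyyMMdd".
        public DateTime? DateOfBirthValue
        {
            get
            {
                DateTime parsed;
                return DateTime.TryParseExact(DateOfBirth, "yyyyMMdd",
                           CultureInfo.InvariantCulture, DateTimeStyles.None, out parsed)
                       ? (DateTime?)parsed
                       : null;
            }
        }

        // The MRN cleanup described above, kept out of the generated getter.
        public string MrnClean
        {
            get { return MRN == null ? null : MRN.Replace("MR~", string.Empty); }
        }
    }
    ```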

    Read the article

  • How does the socket API accept() function work?

    - by Eli Bendersky
    The socket API is the de-facto standard for TCP/IP and UDP/IP communications (that is, networking code as we know it). However, one of its core functions, accept() is a bit magical. To borrow a semi-formal definition: accept() is used on the server side. It accepts a received incoming attempt to create a new TCP connection from the remote client, and creates a new socket associated with the socket address pair of this connection. In other words, accept returns a new socket through which the server can communicate with the newly connected client. The old socket (on which accept was called) stays open, on the same port, listening for new connections. How does accept work? How is it implemented? There's a lot of confusion on this topic. Many people claim accept opens a new port and you communicate with the client through it. But this obviously isn't true, as no new port is opened. You actually can communicate through the same port with different clients, but how? When several threads call recv on the same port, how does the data know where to go? I guess it's something along the lines of the client's address being associated with a socket descriptor, and whenever data comes through recv it's routed to the correct socket, but I'm not sure. It'd be great to get a thorough explanation of the inner-workings of this mechanism.
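    As a concrete illustration, here is a minimal sketch of the usual server loop in C (port number and buffer size arbitrary). accept() hands back a brand-new descriptor bound to the same local port; the kernel routes each incoming segment to the right descriptor by matching the full (source IP, source port, destination IP, destination port) tuple, which is why several connected sockets can share one listening port.

    ```c
    /* Minimal echo server sketch: one listening socket, one accepted socket per client. */
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        int listen_fd = socket(AF_INET, SOCK_STREAM, 0);

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);                 /* example port */

        bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
        listen(listen_fd, 16);                       /* the listening socket never changes */

        for (;;) {
            int conn_fd = accept(listen_fd, NULL, NULL);      /* new socket, same local port */

            char buf[512];
            ssize_t n = recv(conn_fd, buf, sizeof(buf), 0);   /* data from this client only */
            if (n > 0)
                send(conn_fd, buf, (size_t)n, 0);             /* echo it back */

            close(conn_fd);                  /* listen_fd stays open for further clients */
        }
    }
    ```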

    Read the article

  • Password hashing, salt and storage of hashed values

    - by Jonathan Leffler
    Suppose you were at liberty to decide how hashed passwords were to be stored in a DBMS. Are there obvious weaknesses in a scheme like this one? To create the hash value stored in the DBMS, take: A value that is unique to the DBMS server instance as part of the salt, And the username as a second part of the salt, And create the concatenation of the salt with the actual password, And hash the whole string using the SHA-256 algorithm, And store the result in the DBMS. This would mean that anyone wanting to come up with a collision would have to do the work separately for each user name and each DBMS server instance. I'd plan to keep the actual hash mechanism somewhat flexible to allow for the use of the new NIST standard hash algorithm (SHA-3) that is still being worked on. The 'value that is unique to the DBMS server instance' need not be secret - though it wouldn't be divulged casually. The intention is to ensure that if someone uses the same password in different DBMS server instances, the recorded hashes would be different. Likewise, the user name would not be secret - just the password proper. Would there be any advantage to having the password first and the user name and 'unique value' second, or any other permutation of the three sources of data? Or what about interleaving the strings? Do I need to add (and record) a random salt value (per password) as well as the information above? (Advantage: the user can re-use a password and still, probably, get a different hash recorded in the database. Disadvantage: the salt has to be recorded. I suspect the advantage considerably outweighs the disadvantage.) There are quite a lot of related SO questions - this list is unlikely to be comprehensive: Encrypting/Hashing plain text passwords in database Secure hash and salt for PHP passwords The necessity of hiding the salt for a hash Client-side MD5 hash with time salt Simple password encryption Salt generation and Open Source software I think that the answers to these questions support my algorithm (though if you simply use a random salt, then the 'unique value per server' and username components are less important).
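    A worked sketch of the steps above, including the optional per-password random salt from the last paragraph (Python is used only for brevity; the concatenation order and encodings are illustrative, not a recommendation):

    ```python
    import hashlib
    import os

    SERVER_ID = "dbms-instance-42"   # hypothetical per-instance value; not secret, but not divulged casually

    def hash_password(username, password):
        """Return (salt_hex, digest_hex) to be stored in the DBMS."""
        salt = os.urandom(16)   # per-password random salt, recorded alongside the hash
        material = SERVER_ID.encode() + username.encode() + salt + password.encode()
        return salt.hex(), hashlib.sha256(material).hexdigest()

    def verify(username, password, stored_salt_hex, stored_digest_hex):
        """Recompute the hash at login time and compare with the stored value."""
        salt = bytes.fromhex(stored_salt_hex)
        material = SERVER_ID.encode() + username.encode() + salt + password.encode()
        return hashlib.sha256(material).hexdigest() == stored_digest_hex
    ```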

    Read the article

  • Handles Comparison: empty classes vs. undefined classes vs. void*

    - by Nawaz
    Microsoft's GDI+ defines many empty classes to be treated as handles internally. For example, (source GdiPlusGpStubs.h) //Approach 1 class GpGraphics {}; class GpBrush {}; class GpTexture : public GpBrush {}; class GpSolidFill : public GpBrush {}; class GpLineGradient : public GpBrush {}; class GpPathGradient : public GpBrush {}; class GpHatch : public GpBrush {}; class GpPen {}; class GpCustomLineCap {}; There are two other ways to define handles. They are: //Approach 2 class BOOK; //no need to define it! typedef BOOK *PBOOK; typedef PBOOK HBOOK; //handle to be used internally //Approach 3 typedef void* PVOID; typedef PVOID HBOOK; //handle to be used internally I just want to know the advantages and disadvantages of each of these approaches. One advantage with Microsoft's approach is that they can define a type-safe hierarchy of handles using empty classes, which (I think) is not possible with the other two approaches. What else? EDIT: One advantage with the second approach (i.e. using incomplete classes) is that we can prevent clients from dereferencing the handles (meaning this approach appears to support encapsulation strongly, I suppose). The code would not even compile if one attempts to dereference a handle. What else?

    Read the article

  • JBoss Clustered Service that sends emails from txt file

    - by michael lucas
    I need a little push in the right direction. Here's my problem: I have to create an ultra-reliable service that sends email messages to clients whose addresses are stored in a txt file on an FTP server. A single txt file may contain an unlimited number of entries. Most often the file contains about 300,000 entries. The service exposes an interface with just two simple methods: TaskHandle sendEmails(String ftpFilePath); ProcessStatus checkProcessStatus(TaskHandle taskHandle); The sendEmails() method returns a TaskHandle by which we can ask for the ProcessStatus. For such a service to be reliable, clustering is necessary. Processing a single txt file might take a long time. Restarting one node in the cluster should have no impact on sending emails. We use JBoss AS 4.2.0, which comes with a nice HASingletonController that ensures one instance of the service is running at any given time. But once a fail-over happens, the second service should continue working from where the first one stopped. How can I share state between nodes in a cluster in such a way that leaves no possibility of sending some emails twice?

    Read the article

  • querying a huge database table takes too much time in mysql

    - by Vijay
    Hi all, I am running SQL queries on a MySQL table that has 110M+ unique records per day. Problem: Whenever I run any query with a "where" clause it takes at least 30-40 minutes. Since I want to generate most of the data on the next day, I need access to the whole table. Could you please guide me on how to optimize / restructure the deployment model? Site description: mysql Ver 14.12 Distrib 5.0.24, for pc-linux-gnu (i686) using readline 5.0 4 GB RAM, Dual Core dual CPU 3GHz RHEL 3 my.cnf contents: [root@reports root]# cat /etc/my.cnf [mysqld] datadir=/data/mysql/data/ socket=/tmp/mysql.sock sort_buffer_size = 2000000 table_cache = 1024 key_buffer = 128M myisam_sort_buffer_size = 64M # Default to using old password format for compatibility with mysql 3.x # clients (those using the mysqlclient10 compatibility package). old_passwords=1 [mysql.server] user=mysql basedir=/data/mysql/data/ [mysqld_safe] err-log=/data/mysql/data/mysqld.log pid-file=/data/mysql/data/mysqld.pid [root@reports root]# DB table details: CREATE TABLE `RAW_LOG_20100504` ( `DT` date default NULL, `GATEWAY` varchar(15) default NULL, `USER` bigint(12) default NULL, `CACHE` varchar(12) default NULL, `TIMESTAMP` varchar(30) default NULL, `URL` varchar(60) default NULL, `VERSION` varchar(6) default NULL, `PROTOCOL` varchar(6) default NULL, `WEB_STATUS` int(5) default NULL, `BYTES_RETURNED` int(10) default NULL, `RTT` int(5) default NULL, `UA` varchar(100) default NULL, `REQ_SIZE` int(6) default NULL, `CONTENT_TYPE` varchar(50) default NULL, `CUST_TYPE` int(1) default NULL, `DEL_STATUS_DEVICE` int(1) default NULL, `IP` varchar(16) default NULL, `CP_FLAG` int(1) default NULL, `USER_LOCATE` bigint(15) default NULL ) ENGINE=MyISAM DEFAULT CHARSET=latin1 MAX_ROWS=200000000; Thanks in advance! Regards,
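    As a hypothetical illustration of the first thing to check (the actual WHERE columns are not shown above, so USER and DT are assumptions): the table is created with no indexes at all, so every filtered query scans all 110M+ rows.

    ```sql
    -- Assumed filter columns; adjust to whatever the real WHERE clauses actually use.
    ALTER TABLE `RAW_LOG_20100504` ADD INDEX `idx_user_dt` (`USER`, `DT`);

    -- EXPLAIN should then report the index being used instead of a full table scan.
    EXPLAIN SELECT COUNT(*)
    FROM `RAW_LOG_20100504`
    WHERE `USER` = 911234567890 AND `DT` = '2010-05-04';
    ```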

    Read the article

  • Concurrent WCF calls via shared channel

    - by Kent Boogaart
    I have a web tier that forwards calls onto an application tier. The web tier uses a shared, cached channel to do so. The application tier services in question are stateless and have concurrency enabled. But they are not being called concurrently. If I alter the web tier to create a new channel on every call, then I do get concurrent calls onto the application tier. But I wanted to avoid that cost since it is functionally unnecessary for my scenario. I have no session state, nor do I need to re-authenticate the caller each time. I understand that the creation of the channel factory is far more expensive than the creation of the channels, but it is still a cost I'd like to avoid if possible. I found this article on MSDN that states: While channels and clients created by the channels are thread-safe, they might not support writing more than one message to the wire concurrently. If you are sending large messages, particularly if streaming, the send operation might block waiting for another send to complete. Firstly, I'm not sending large messages (just a lot of small ones since I'm doing load testing) but am still seeing the blocking behavior. Secondly, this is rather open-ended and unhelpful documentation. It says they "might not" support writing more than one message but doesn't explain the scenarios under which they would support concurrent messages. Can anyone shed some light on this?
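    A minimal sketch of the middle ground discussed above (the service contract and endpoint name are assumptions, not the poster's code): cache the expensive ChannelFactory once and create a cheap channel per call, so concurrent calls are not serialized on one shared channel.

    ```csharp
    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IAppService            // stand-in for the real application-tier contract
    {
        [OperationContract]
        string Ping(string text);
    }

    public static class AppTierProxy
    {
        // The expensive part: built once and cached for the lifetime of the web tier.
        private static readonly ChannelFactory<IAppService> Factory =
            new ChannelFactory<IAppService>("AppServiceEndpoint");   // hypothetical endpoint name

        // The cheap part: a fresh channel per call, so calls can run concurrently.
        public static TResult Call<TResult>(Func<IAppService, TResult> action)
        {
            IAppService channel = Factory.CreateChannel();
            try
            {
                TResult result = action(channel);
                ((IClientChannel)channel).Close();
                return result;
            }
            catch
            {
                ((IClientChannel)channel).Abort();   // never Close a faulted channel
                throw;
            }
        }
    }
    ```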

    Read the article

  • Is it impossible to secure .net code (intellectual property)?

    - by JL
    I used to work in JavaScript a lot and one thing that really bothered my employers was that the source code was too easy to steal. Even with obfuscation, nothing really helped, because we all knew that any competent developer would be able to read that code if they wanted to. JS scripts are one thing, but what about SOA projects that have millions invested in IP (intellectual property)? I love .net, and especially C#, but I recently again had to answer the question "If we give this compiled program over to our clients, can their developers reverse engineer it?" I had gone out of my way to obfuscate the code, but I knew it wouldn't take that much for another determined C# developer to get at the code. So I earnestly pose the question, is it impossible to secure .net code? The considerations I have are as follows: Even regular native executables can be reversed, but not every developer has the skill to be able to do this. It's a lot harder to disassemble a native executable than a .net assembly. Obfuscation will only get you so far, but it does help a little. Why have I never seen any public acknowledgement by Microsoft that anything written in .net is subject to relatively easy IP theft? Why have I never seen a scrap of countermeasure training on any Microsoft site? Why does VS come with a community obfuscator as an optional component? OK, maybe I have just had my head in the sand here, but it's not exactly high on most developers' priority lists. Are there any plans to address my concerns in any future version of .net? I'm not knocking .net, but I would like some realistic answers, thank you, question marked as subjective and community!

    Read the article

  • What's a way for a client to automatically resolve the ip address of a server?

    - by zooropa
    The project I am working on is a client/server architecture. In a LAN environment, I want the clients to be able to automatically determine the server's address. I want to avoid having to manually configure each client with the IP address of the server. What is the best way to do this? Some alternatives I have thought about are: The server could listen for broadcast packets from the clients. The message from the client would be a request for the IP address of the server. The server would respond with its address. The machine running my project's server could also have a bind server running. The LAN's router could be configured to use it as one of its DNS servers. I think I saw that there is a bind library. Does that mean I can build the bind service into my server so that bind doesn't have to be installed on the server? Any other ideas? What have you done in the past? What are the pros/cons of these approaches and others that might be suggested? Thanks for your help!
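    A rough sketch of the broadcast alternative (the port number and message text are assumptions): the client broadcasts a discovery request on the LAN, and the server's reply reveals its address.

    ```python
    import socket

    DISCOVERY_PORT = 50000          # hypothetical; must match on both sides

    def server_loop():
        """Run on the server: answer discovery broadcasts."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", DISCOVERY_PORT))
        while True:
            data, client_addr = sock.recvfrom(1024)
            if data == b"WHERE_IS_SERVER":
                sock.sendto(b"SERVER_HERE", client_addr)   # reply's source IP is the server's address

    def find_server(timeout=2.0):
        """Run on the client: broadcast and wait briefly for a reply."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.settimeout(timeout)
        sock.sendto(b"WHERE_IS_SERVER", ("255.255.255.255", DISCOVERY_PORT))
        try:
            _data, (server_ip, _port) = sock.recvfrom(1024)
            return server_ip
        except socket.timeout:
            return None
    ```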

    Read the article

  • Passing Derived Class Instances as void* to Generic Callbacks in C++

    - by Matthew Iselin
    This is a bit of an involved problem, so I'll do the best I can to explain what's going on. If I miss something, please tell me so I can clarify. We have a callback system where on one side a module or application provides a "Service" and clients can perform actions with this Service (A very rudimentary IPC, basically). For future reference let's say we have some definitions like so: typedef int (*callback)(void*); // This is NOT in our code, but makes explaining easier. installCallback(string serviceName, callback cb); // Really handled by a proper management system sendMessage(string serviceName, void* arg); // arg = value to pass to callback This works fine for basic types such as structs or builtins. We have an MI structure a bit like this: Device <- Disk <- MyDiskProvider class Disk : public virtual Device class MyDiskProvider : public Disk The provider may be anything from a hardware driver to a bit of glue that handles disk images. The point is that classes inherit Disk. We have a "service" which is to be notified of all new Disks in the system, and this is where things unravel: void diskHandler(void *p) { Disk *pDisk = reinterpret_cast<Disk*>(p); // Uh oh! // Remainder is not important } SomeDiskProvider::initialise() { // Probe hardware, whatever... // Tell the disk system we're here! sendMessage("disk-handler", reinterpret_cast<void*>(this)); // Uh oh! } The problem is, SomeDiskProvider inherits Disk, but the callback handler can't receive that type (as the callback function pointer must be generic). Could RTTI and templates help here? Any suggestions would be greatly appreciated.
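    One way around the mismatch, shown as a minimal sketch rather than the poster's actual code: upcast to the interface type the handler expects before erasing the type, so the cast on the receiving side is the exact inverse. Under multiple/virtual inheritance a Disk* and a SomeDiskProvider* may hold different addresses, which is why the two casts must be symmetric.

    ```cpp
    #include <cstdio>

    struct Device { virtual ~Device() {} };
    struct Disk : public virtual Device {
        virtual const char *label() const { return "generic disk"; }
    };

    int diskHandler(void *p)
    {
        // Exact inverse of the cast performed by the sender, so this is well defined.
        Disk *pDisk = static_cast<Disk *>(p);
        std::printf("new disk: %s\n", pDisk->label());
        return 0;
    }

    // Stand-in for the real message dispatch; it only forwards the opaque pointer.
    void sendMessage(const char * /*serviceName*/, void *arg) { diskHandler(arg); }

    struct SomeDiskProvider : public Disk {
        const char *label() const { return "SomeDiskProvider"; }
        void initialise()
        {
            // Upcast first (this may adjust the pointer under MI), then erase the type.
            sendMessage("disk-handler", static_cast<void *>(static_cast<Disk *>(this)));
        }
    };

    int main()
    {
        SomeDiskProvider provider;
        provider.initialise();   // prints "new disk: SomeDiskProvider"
        return 0;
    }
    ```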

    Read the article

  • C++ std::stringstream seemingly causes thread to hang or die under SunOS

    - by stretch
    I have an application developed under Linux with GCC 4.2 which makes quite heavy use of stringstreams to wrap and unwrap data being sent over the wire (because the Grid API I'm using demands it). Under Linux everything is fine, but when I deploy to SunOS (v5.10 running SPARC) and compile with GCC 3.4.6 the app hangs when it reaches the point at which stringstreams are used. In more detail: The main thread accepts requests from clients and starts a new pthread to handle each request. The child thread uses stringstreams to pack data. When the child thread gets to that point it seems to hang for a second and then die. The main thread is unaffected. Are there any known issues with stringstream and GCC 3.4.6 or SunOS or SPARCs? I haven't found anything yet... Can anyone suggest a better way to pack and unpack large amounts of data as strings or byte streams? Apologies for not posting code, but this to me seems more involved than a simple syntax error. All the same, the thread crashes: std::stringstream mystringstream; //not here mystringstream << "some data: "; //but here That is, I can declare the stringstream, but when I try to use it something goes wrong.

    Read the article

  • [WEB] Local/Dev/Live deployment - best workflow

    - by Adam Kiss
    Hello, Situation: We are a little company with 3 people; each of us has a localhost webserver, and most projects (previous and current) are on one network-shared disk. We have a virtual server hosting some of our clients' sites and our own site. Our standard workflow is: Coder PC -> programmer localhost -> dev domain (client.company.com) -> live version (client.com). It often happens that two or three guys are working on the same project at the same time - one is on the dev version, two are on localhost. When finished, we try to synchronize the files on the dev version and ideally not mess up (thanks ILMV :]) any files, which *knock knock* doesn't happen often. And then one of us deploys the dev version to the live webserver. Question: we are looking for a way to simplify this workflow while updating websites - probably some sort of diff uploader or VCS (Git/SVN/...), but we are not completely sure where to begin or which way would be ideal, therefore I ask you, fellow stackoverflowers, for your experience with website / application deployment and recommended workflow. We will probably also need to use a Mac in the process, so if that won't be a problem, that would be even better. Thank you

    Read the article

  • TransactionScope question - how can I keep the DTC from getting involved in this?

    - by larryq
    (I know the circumstances surrounding the DTC and promoting a transaction can be a bit mysterious to those of us not in the know, but let me show you how my company is doing things, and if you can tell me why the DTC is getting involved, and if possible, what I can do to avoid it, I'd be grateful.) I have code running on an ASP.Net webserver. We have one database, SQL 2008. Our data access code looks something like this-- We have a data access layer that uses a wrapper object for SQLConnections and SQLCommands. Typical use looks like this: void method1() { objDataObject = new DataAccessLayer(); objDataObject.Connection = SomeConnectionMethod(); SqlCommand objCommand = DataAccessUtils.CreateCommand(SomeStoredProc); //create some SqlParameters, add them to the objCommand, etc.... objDataObject.BeginTransaction(IsolationLevel.ReadCommitted); objDataObject.ExecuteNonQuery(objCommand); objDataObject.CommitTransaction(); objDataObject.CloseConnection(); } So indeed, a very thin wrapper around SqlClient, SqlConnection etc. I want to run several stored procs in a transaction, and the wrapper class doesn't allow me access to the SqlTransaction so I can't pass it from one component to the next. This led me to use a TransactionScope: using (TransactionScope tx1 = new TransactionScope(TransactionScopeOption.RequiresNew)) { method1(); method2(); method3(); tx1.Complete(); } When I do this, the DTC gets involved, and unfortunately our webservers don't have "allow remote clients" enabled in the MSDTC settings-- getting IT to allow that will be a fight. I'd love to avoid the DTC becoming involved, but can I? Can I leave out the transactional calls in methods 1-3() and just let the TransactionScope figure it all out?
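    A minimal sketch of one way to keep the DTC out of it (an assumed restructuring, not the poster's wrapper): on SQL Server 2008, a TransactionScope that only ever sees a single open SqlConnection remains a lightweight local transaction and is not promoted. The stored procedure names are placeholders for method1()-method3().

    ```csharp
    using System.Data.SqlClient;
    using System.Transactions;

    public static class OrderWorkflow
    {
        public static void RunAll(string connectionString)
        {
            using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required))
            using (SqlConnection connection = new SqlConnection(connectionString))
            {
                connection.Open();                 // enlists in the ambient transaction

                ExecProc(connection, "dbo.Step1"); // stand-ins for method1()..method3(),
                ExecProc(connection, "dbo.Step2"); // each reusing the same open connection
                ExecProc(connection, "dbo.Step3");

                scope.Complete();
            }
        }

        private static void ExecProc(SqlConnection connection, string procName)
        {
            using (SqlCommand command = new SqlCommand(procName, connection))
            {
                command.CommandType = System.Data.CommandType.StoredProcedure;
                command.ExecuteNonQuery();
            }
        }
    }
    ```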

    Read the article

  • Bluetooth in Java Mobile: Handling connections that go out of range

    - by Albus Dumbledore
    I am trying to implement a server-client connection over SPP. After initializing the server, I start a thread that first listens for clients and then receives data from them. It looks like this: public final void run() { while (alive) { try { /* * Await client connection */ System.out.println("Awaiting client connection..."); client = server.acceptAndOpen(); /* * Start receiving data */ int read; byte[] buffer = new byte[128]; DataInputStream receive = client.openDataInputStream(); try { while ((read = receive.read(buffer)) > 0) { System.out.println("[Recieved]: " + new String(buffer, 0, read)); if (!alive) { return; } } } finally { System.out.println("Closing connection..."); receive.close(); } } catch (IOException e){ e.printStackTrace(); } } } It's working fine, as I am able to receive messages. What's troubling me is how the thread would eventually die when a device goes out of range. Firstly, the call to receive.read(buffer) blocks so that the thread waits until it receives any data. If the device goes out of range, it would never proceed onward to check if meanwhile it has been interrupted. Secondly, it would never close the connection, i.e. the server would not accept the device once it goes back in range. Thanks! Any ideas would be highly appreciated! Merry Christmas!

    Read the article

  • Dynamically compose ASP MVC view based on context

    - by robertFall
    I have several views that are composed of several partial views based on context. For example, we have a Project view that shows all the details of a project, which includes the scalar values (name, etc.) and also all the assigned employees, tasks and/or clients. The problem I have is that certain types of project have all of the above sections while others have only two or even one section, i.e. details only. What is the best way to compose the Projects master view? I don't want to have logic to check the project in the view. Is there a way to compose a view in code by programmatically rendering the relevant partials and ignoring the rest? Otherwise, are there any other ideas on how to do this in a maintainable way? I could of course just render the partials using if statements to check if they apply, but that way the view contains VERY important logic. In another situation we want to use this method to display content based on the type of subscription a user has. Thanks!

    Read the article

  • Thread-safe data structure design

    - by Inso Reiges
    Hello, I have to design a data structure that is to be used in a multi-threaded environment. The basic API is simple: insert element, remove element, retrieve element, check that element exists. The structure's implementation uses implicit locking to guarantee the atomicity of a single API call. After I implemented this it became apparent that what I really need is atomicity across several API calls. For example, if a caller needs to check the existence of an element before trying to insert it, he can't do that atomically even if each single API call is atomic: if(!data_structure.exists(element)) { data_structure.insert(element); } The example is somewhat awkward, but the basic point is that we can't trust the result of the "exists" call anymore after we return from the atomic context (the generated assembly clearly shows a minor chance of a context switch between the two calls). What I currently have in mind to solve this is exposing the lock through the data structure's public API. This way clients will have to explicitly lock things, but at least they won't have to create their own locks. Is there a better commonly-known solution to these kinds of problems? And as long as we're at it, can you recommend some good literature on thread-safe design? EDIT: I have a better example. Suppose that element retrieval returns either a reference or a pointer to the stored element and not its copy. How can a caller be protected so it can safely use this pointer/reference after the call returns? If you think that not returning copies is a problem, then think about deep copies, i.e. objects that should also copy other objects they point to internally. Thank you.
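    A minimal sketch of the "expose the lock" idea in C++ (illustrative names): callers take the structure's own lock for multi-call sequences and use unlocked variants while holding it, while the single-call API keeps locking implicitly.

    ```cpp
    #include <mutex>
    #include <set>

    template <typename T>
    class ConcurrentSet {
    public:
        // Exposed so callers can make several calls atomic without their own mutex.
        std::mutex &lock() { return mutex_; }

        // Unlocked variants, to be called only while holding lock().
        bool existsUnlocked(const T &v) const { return items_.count(v) != 0; }
        void insertUnlocked(const T &v) { items_.insert(v); }

        // Single-call API: still atomic on its own.
        bool exists(const T &v) const { std::lock_guard<std::mutex> g(mutex_); return existsUnlocked(v); }
        void insert(const T &v) { std::lock_guard<std::mutex> g(mutex_); insertUnlocked(v); }

    private:
        mutable std::mutex mutex_;
        std::set<T> items_;
    };

    // Caller side: check-then-insert is now atomic with respect to other threads.
    // std::lock_guard<std::mutex> g(set.lock());
    // if (!set.existsUnlocked(element)) set.insertUnlocked(element);
    ```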

    Read the article

  • How to save timers with connection to OrderId?

    - by Adrian Serafin
    Hi! I have a system where clients can make orders. After making an order they have 60 minutes to pay for it before it is deleted. On the server side, when an order is made I create a timer and set the elapsed time to 60 minutes: System.Timers.Timer timer = new System.Timers.Timer(1000*60*60); timer.AutoReset = false; timer.Elapsed += HandleElapsed; timer.Start(); Because I want to be able to dispose of the timer if the client decides to pay, and to cancel the order if he doesn't, I keep two Dictionaries: Dictionary<int, Timer> _orderTimer; Dictionary<Timer, int> _timerOrder; Then when a client pays I can access the Timer by orderId in O(1) thanks to the _orderTimer dictionary, and when the time elapses I can access the order in O(1) thanks to the _timerOrder dictionary. My question is: is this a good approach, assuming that the maximum number of entries I have to keep in the dictionaries at any one moment will be 50,000? Maybe it would be better to derive from the Timer class, add a property called OrderId, keep them all in a List and search for the order/timer using LINQ? Or maybe I should do this in a different way?
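    A rough sketch of a single-dictionary alternative (illustrative names, using System.Threading.Timer rather than System.Timers.Timer): the order id travels with the timer as its callback state, so no reverse Timer-to-orderId lookup is needed.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Threading;

    public class OrderTimeouts
    {
        private readonly Dictionary<int, Timer> _timersByOrder = new Dictionary<int, Timer>();

        public void StartPaymentTimeout(int orderId)
        {
            Timer timer = new Timer(state =>
            {
                int id = (int)state;        // the order id arrives with the callback
                CancelOrder(id);            // the 60 minutes ran out without payment
                RemoveTimer(id);
            }, orderId, TimeSpan.FromMinutes(60), TimeSpan.FromMilliseconds(-1));

            lock (_timersByOrder) { _timersByOrder[orderId] = timer; }
        }

        public void OnOrderPaid(int orderId)
        {
            RemoveTimer(orderId);           // payment arrived in time: drop the timeout
        }

        private void RemoveTimer(int orderId)
        {
            lock (_timersByOrder)
            {
                Timer timer;
                if (_timersByOrder.TryGetValue(orderId, out timer))
                {
                    timer.Dispose();
                    _timersByOrder.Remove(orderId);
                }
            }
        }

        private void CancelOrder(int orderId) { /* hypothetical: delete the unpaid order */ }
    }
    ```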

    Read the article

  • How should I name a native DLL distributed in both 32-bit and 64-bit form?

    - by Spike0xff
    I have a commercial product that's a DLL (native 32-bit code), and now it's time to build a 64-bit version of it. So when installing on 64-bit Windows, the 32-bit version goes into Windows\SysWOW64, and the 64-bit version goes into... Windows\System32! (I'm biting my tongue here...) Or the DLL(s) can be installed alongside the client application. What should I name the 64-bit DLL? Same name as 32-bit: Two files that do the same thing, have the same name, but are totally non-interchangeable. Isn't that a recipe for confusion and support problems? Different names (e.g. product.dll and product64.dll): Now client applications have to know whether they are running 32-bit or 64-bit in order to reference my DLL, and there are languages where that isn't known until run-time - .NET being just one example. And now all the statically compiled clients have to conditionalize the import declarations: IF target=WIN64 THEN import Blah from "product64.dll" ELSE import Blah from "product.dll" ENDIF The product contains massive amounts of C code, and a large chunk of C++ - porting it to C# is not an option. Advice? Suggestions?

    Read the article

  • Twitter api - no more than 150 requests per hour....

    - by RenegadeAndy
    Hi. I am writing a twitter app using jtwitter, and it's running on a server at my workplace. Anyway, whenever I run it from work it returns the error below, and I am only making a couple of requests per hour: HTTP/1.1 400 Bad Request {"request":"/1/statuses/user_timeline.json?count=6&id=cicsdemo&","error":"Rate limit exceeded. Clients may not make more than 150 requests per hour."} ] 2010-06-03 18:44:49 zero.timer.TimerTask::run Thread-3 SEVERE [ CWPZA3100E: Exception during processing for timer task, "twitterTimer". Exception: java.lang.ClassCastException: winterwell.jtwitter.Twitter$Status incompatible with java.lang.String ] I run the same code from home - it's fine. So obviously Twitter thinks all our work traffic is coming from one IP - which is why it's hitting a limit it shouldn't. Do I have any choice or workaround - can I make the limit count against my machine's IP, or against my account instead of the IP? Can I use a proxy? Has anybody else had this problem and solved it? Before anyone asks, the app must live at my workplace - it cannot run anywhere else! Cheers, Andy

    Read the article

  • Firefox extension: How to inject javascript into page and run it?

    - by el griz
    I'm writing a Firefox extension to allow users to annotate any page with text and/or drawings and then save an image of the page including the annotations. Use cases would be clients reviewing web pages, adding feedback to the page, saving the image of this and emailing it back to the web developer, or testers taking annotated screenshots of GUI bugs etc. I wrote the annotation/drawing functionality in javascript before developing the extension. This script adds a <canvas> element to the page to draw upon as well as a toolbar (in a <div>) that contains buttons (each a <canvas> element) for the different draw tools, e.g. line, box, ellipse, text, etc. This works fine when manually included in a page. I now need a way for the extension to: Inject this script into any page, including pages I don't control. This needs to occur when the user invokes the extension, which can be after the page has loaded. Once injected, the init() function in this script that adds the canvas and toolbar elements etc. needs to be run somehow, but I can't determine how to call this from the extension. Note that once injected I don't need this script to interact with the extension (as the extension just takes a screenshot of the entire document (and removes the added page elements) when the user presses the save button in the extension chrome).

    Read the article

  • Overlapping 2 Flash objects and controlling z-index

    - by Magnus Smith
    I have two Flash objects on a webpage (call them A and B), and they overlap so one partially obscures the other. I don't seem to have any control over the z-index, to force B in front of A. Whatever I try, A always 'wins' and stays on top! I have read many people's posts about problems with getting HTML to show over the top of Flash... but nothing about when your two overlapping items are both Flash objects. I have tried various combinations of wmode=opaque/transparent/window I have tried CSS position:absolute/relative and z-index:0/999 I have tried placing the HTML sections in a different order The problem is the same in IE6 and Firefox 2.0 I do not want to use jQuery in this case In my particular situation B must have position:absolute and wmode=transparent, and sit above A. A needs relative positioning and transparency is not required. However, I have been testing without these restrictions, and I still have no control over the overlap. Are some SWFs (ours are adverts sent by clients) created in such a way as to override any code control of z-index? Thanks for any advice you can give.

    Read the article

  • How to format the node_redis info function output?

    - by hh54188
    I want to check the Redis info on my PC with Node, so I use node_redis and run the info function: var redis = require("redis"), client = redis.createClient(); client.on("connect", function () { client.info(function (err, replay) { console.log(replay); }) }) but the response is unformatted: `#Server\r\nredis_version:2.6.16\r\nredis_git_sha1:00000000\r\nredis_git_dirty:0\r\nredis_mode:standalone\r\nos:Linux 3.8.0-29-generic x86_64\r\narch_bits:64\r\nmultiplexing_api:epoll\r\ngcc_version:4.6.3\r\nprocess_id:2941\r\nrun_id:e60f261a6f4f6f081563a47961315eff6b1c005d\r\ntcp_port:6379\r\nuptime_in_seconds:1777\r\nuptime_in_days:0\r\nhz:10\r\nlru_clock:2040689\r\n\r\n# Clients\r\nconnected_clients:2\r\nclient_longest_output_list:0\r\nclient_biggest_input_buf:0\r\nblocked_clients:0\r\n\r\n# Memory\r\nused_memory:562584\r\nused_memory_human:549.40K\r\nused_memory_rss:2031616\r\nused_memory_peak:561784\r\nused_memory_peak_human:548.62K\r\nused_memory_lua:31744\r\nmem_fragmentation_ratio:3.61\r\nmem_allocator:jemalloc-3.2.0\r\n\r\n# Persistence\r\nloading:0\r\nrdb_changes_since_last_save:0\r\nrdb_bgsave_in_progress:0\r\nrdb_last_save_time:1383553917\r\nrdb_last_bgsave_status:ok\r\nrdb_last_bgsave_time_sec:-1\r\nrdb_current_bgsave_time_sec:-1\r\naof_enabled:0\r\naof_rewrite_in_progress:0\r\naof_rewrite_scheduled:0\r\naof_last_rewrite_time_sec:-1\r\naof_current_rewrite_time_sec:-1\r\naof_last_bgrewrite_status:ok\r\n\r\n# Stats\r\ntotal_connections_received:3\r\ntotal_commands_processed:5\r\ninstantaneous_ops_per_sec:0\r\nrejected_connections:0\r\nexpired_keys:0\r\nevicted_keys:0\r\nkeyspace_hits:0\r\nkeyspace_misses:0\r\npubsub_channels:0\r\npubsub_patterns:0\r\nlatest_fork_usec:0\r\n\r\n# Replication\r\nrole:master\r\nconnected_slaves:0\r\n\r\n# CPU\r\nused_cpu_sys:0.13\r\nused_cpu_user:0.19\r\nused_cpu_sys_children:0.00\r\nused_cpu_user_children:0.00\r\n\r\n# Keyspace\r\n' How can I turn it into an object, like: { redis_version:2.6.16, redis_git_sha1:00000000, redis_git_dirty:0, ...... } so that I can read each property's value and get the information I need?
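    A rough sketch of parsing that reply (assuming it is the plain INFO string shown above): split on CRLF, skip the '#' section headers and blank lines, and split each remaining line on the first colon.

    ```javascript
    var redis = require("redis"),
        client = redis.createClient();

    function parseRedisInfo(reply) {
        var result = {};
        reply.split("\r\n").forEach(function (line) {
            if (!line || line.charAt(0) === "#") { return; }   // skip blanks and section headers
            var idx = line.indexOf(":");
            if (idx === -1) { return; }
            result[line.slice(0, idx)] = line.slice(idx + 1);  // values stay as strings
        });
        return result;
    }

    client.on("connect", function () {
        client.info(function (err, reply) {
            if (err) { throw err; }
            var info = parseRedisInfo(reply);
            console.log(info.redis_version);   // e.g. "2.6.16"
            client.quit();
        });
    });
    ```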

    Read the article

  • Possible to capture all events in a web browser?

    - by David
    I am working on a pet project and am at the research stage. Quick summary: I am trying to intercept all form submits, onclick, and every single keydown. My library of choice is either jQuery, or jQuery + prototypejs. I figure I can batch up the events into a queue/stack and send it back to the server in time-interval batches to keep performance relatively stable. Concerns: Form submits and changes would be relatively simple. Something like $("form :input").bind("change", function() { ... record event... }); But how do I ensure I get precedence over the application's handlers, as I have a habit of putting return false on a lot of my form handlers when there is a validation issue? As I understand it, that effectively stops the event in its tracks. My project: For my smaller remote clients I will put their products onto a VPS or run it in my home data center. Currently I use a basic authentication system: given a username/password they see the website and then hopefully send me somewhat sane notes on what is broken or should be tweaked. As a better solution, I've written a simple proxy web server that does the above but allows me to have one DNS entry, and then depending on credentials it makes an internal request, relaying headers and rewriting URLs as needed. Every single text/html or application/* request is compressed and shoved into a sqlite table so I can partially replay what they've done. Now I am shifting to the frontend and would like to capture clicks, keydowns, and submits on everything on the page.
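    A rough sketch of the interception idea (the endpoint name and batching interval are assumptions): listeners registered for the capture phase on document run before the page's own bubble-phase handlers, so a handler that does "return false" further down cannot hide the event from the recorder.

    ```javascript
    var eventQueue = [];

    ["submit", "click", "keydown", "change"].forEach(function (type) {
        document.addEventListener(type, function (e) {
            eventQueue.push({
                type: type,
                target: e.target && e.target.tagName,
                time: new Date().getTime()
            });
        }, true);   // true = register on the capture phase
    });

    // Flush the queue to the server in batches (endpoint name is hypothetical).
    setInterval(function () {
        if (!eventQueue.length) { return; }
        var batch = eventQueue.splice(0, eventQueue.length);
        $.post("/__recorder/events", { events: JSON.stringify(batch) });
    }, 5000);
    ```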

    Read the article

  • Most cost effective way to target multiple mobile platforms

    - by niidto
    Hi, I have been given the task of speccing a mobile application, which will need to run on approx. 1000 devices. These devices already exist, and consist of iPhones, BlackBerrys, Androids, Windows Mobile and netbooks. The application will have simple reporting capability, and a collection of forms. Anyway, the obvious solution would be to develop a browser-based solution, although given the occasionally connected nature of the devices, there's a potential for data to get lost / not saved. So instead of creating a complex application for each platform, I was thinking we could build what is effectively a form generator with basic offline storage capability (text files), designed to run on each device. The device would generate a form based on, for example, an XML file that it could request from a server somewhere. This would mean minimal specialist development costs and the ability to run most of the logic on the server end, with the devices being dumb clients that render forms and upload the data when a connection is available. Anyway, my question summarised is, how have you made the decision on supporting multiple devices for your application? Is this always an unavoidable problem where you just have to make the call to support one or two platforms, pay for developers to write code for each platform, or alternatively supply pre-installed devices to the company? Many thanks James

    Read the article

  • Double hashing passwords - client & server

    - by J. Stoever
    Hey, first, let me say, I'm not asking about things like md5(md5(..., there are already topics about it. My question is this: We allow our clients to store their passwords locally. Naturally, we don't want them stored in plain text, so we HMAC them locally before storing and/or sending. Now, this is fine, but if this is all we did, then the server would have the stored HMAC, and since the client only needs to send the HMAC, not the plain text password, an attacker could use the stored hashes from the server to access anyone's account (in the catastrophic scenario where someone would get such access to the database, of course). So, our idea was to hash the password on the client once via HMAC, send it to the server, and there hash it a second time via HMAC and match it against the stored, twice-HMAC'ed password. This would ensure that: the client can store the password locally without having to store it as plain text; the client can send the password without having to worry (too much) about other network parties; and the server can store the password without having to worry about someone stealing it from the server and using it to log in. Naturally, all the other things (strong passwords, double salt, etc.) apply as well, but aren't really relevant to the question. The actual question is: does this sound like a solid security design? Did we overlook any flaws with doing things this way? Is there maybe a security pattern for something like this?
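    A rough sketch of the two-step scheme described above (the key handling is illustrative; per-user salts and a slow KDF would still be needed in practice):

    ```python
    import hashlib
    import hmac

    CLIENT_KEY = b"client-side-key"      # hypothetical, known to the client application
    SERVER_KEY = b"server-side-key"      # hypothetical, known only to the server

    def client_hash(password):
        """What the client stores locally and sends instead of the plain password."""
        return hmac.new(CLIENT_KEY, password.encode(), hashlib.sha256).hexdigest()

    def server_store(client_hashed):
        """What actually ends up in the database: a second HMAC over the first."""
        return hmac.new(SERVER_KEY, client_hashed.encode(), hashlib.sha256).hexdigest()

    def server_verify(client_hashed, stored):
        """A stolen 'stored' value alone is not enough to log in, since the server re-HMACs whatever is sent."""
        return hmac.compare_digest(server_store(client_hashed), stored)
    ```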

    Read the article
