Search Results

Search found 6326 results on 254 pages for 'continuous operation'.

Page 92/254 | < Previous Page | 88 89 90 91 92 93 94 95 96 97 98 99  | Next Page >

  • Documenting a REST interface with a flowchart

    - by James Kassemi
    Does anybody have suggestions on creating a flowchart representation of a REST-style web interface? In the interest of supplying thorough documentation to co-developers, I've been toying around in Dia, modeling the interface for modifying and generating a product resource. This particular system begins to act differently with user authentication/resource counts, so before I make modifications I'm looking for some clarification. Complexity: how would you simplify the overall structure to make it easier to read? Display Symbol: is this appropriate for representing a page? Manual Operation Symbol: is this appropriate for representing a user action like a button click? Any other suggestions would be greatly appreciated. My apologies for the re-post; the main Stack Exchange site suggested this question was better presented on Programmers.

    Read the article

  • What is the benefit of writing to a temp location, and then copying it to the intended destination?

    - by Devdatta Tengshe
    I am writing an application that works with satellite images, and my boss asked me to look at some of the commercial applications and see how they behave. I found a strange behavior, and as I kept looking, I found it in other standard applications as well. These programs first write to the temp folder, and then copy the result to the intended destination. Example: 7zip first extracts to the temp folder, and then copies the extracted data to the location you asked it to extract to. I see several problems with this approach: (1) the temp folder might not have enough space, while the intended location does; and (2) for a large file, the copy operation can take a non-negligible amount of time. I have thought about it a lot, but I can't see a single positive point to doing this. Am I missing something, or is there a real benefit to doing this?
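
    For what it's worth, one pattern often associated with this behavior is easy to sketch: finish the file somewhere else, then move it into place, so readers of the destination never observe a half-written file. A minimal Python sketch of that pattern - note it deliberately creates the temp file next to the destination so the final step is an atomic rename, which is not what tools using the global temp folder get:

      import os
      import tempfile

      def write_via_temp(data: bytes, dest: str) -> None:
          """Write to a temp file first, then move the finished file into place.

          This sketch puts the temp file in the destination directory, so
          os.replace() is an atomic rename on the same filesystem; programs
          that use the system-wide temp folder must fall back to a full copy.
          """
          dest_dir = os.path.dirname(os.path.abspath(dest))
          fd, tmp_path = tempfile.mkstemp(dir=dest_dir)
          try:
              with os.fdopen(fd, "wb") as f:
                  f.write(data)            # readers never see a half-written dest
              os.replace(tmp_path, dest)   # atomic on the same filesystem
          except BaseException:
              os.remove(tmp_path)          # clean up the partial file on failure
              raise

      write_via_temp(b"example payload", "output.bin")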

    Read the article

  • Why isn't there a Python compiler to native machine code?

    - by user2986898
    As I understand it, the speed difference between compiled languages and Python comes from the former compiling code all the way down to the native machine's instructions, whereas Python compiles to Python bytecode, which is interpreted by the PVM. I see that this is how Python code can run on multiple operating systems (at least in most cases), but I do not understand why there is not an additional (and optional) compiler for Python that compiles the same way traditional compilers do. This would leave it to the programmer to choose which matters more to them: multi-platform executability or performance on the native machine. In general, why aren't there languages that can behave as both compiled and interpreted?
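
    The bytecode half of that description is easy to observe with CPython's standard dis module; this minimal sketch prints the stack-machine instructions the PVM interprets in place of native machine code:

      import dis

      def add(a, b):
          return a + b

      # Shows instructions such as LOAD_FAST and BINARY_ADD
      # (BINARY_OP on CPython 3.11+), not native machine code.
      dis.dis(add)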

    Read the article

  • Best Practices - Dynamic Reconfiguration

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (formerly named Logical Domains).

    Overview of Dynamic Reconfiguration

    Oracle VM Server for SPARC supports Dynamic Reconfiguration (DR), making it possible to add or remove resources to or from a domain (virtual machine) while it is running. This is extremely useful because resources can be shifted to or from virtual machines in response to load conditions without having to reboot or interrupt running applications. For example, if an application requires more CPU capacity, you can add CPUs to improve performance, and remove them when they are no longer needed. You can even use Dynamic Resource Management (DRM) policies that automatically add and remove CPUs to domains based on load.

    How it works (in broad general terms)

    Dynamic Reconfiguration is done in coordination with Solaris, which recognises a hypervisor request to change its virtual machine configuration and responds appropriately. In essence, Solaris receives a message saying "you now have 16 more CPUs numbered 16 to 31" or "8GB more RAM starting at address X" or "here's a new network or disk device - have fun with it". These actions take very little time. Solaris can then start using the new resource. In the case of added CPUs, that means dispatching processes and potentially binding interrupts to the new CPUs. For memory, Solaris adds the new memory pages to its "free" list and starts using them. Comparable actions occur with network and disk devices: they are recognised by Solaris and then used. Removing is the reverse process: after receiving the DR message to free specific CPUs, Solaris unbinds interrupts assigned to the CPUs and stops dispatching process threads. That takes very little time.

      primary # ldm list
      NAME     STATE   FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
      primary  active  -n-cv-  SP    16    4G      1.0%  6d 22h 29m
      ldom1    active  -n----  5000  16    8G      0.9%  6h 59m
      primary # ldm set-core 5 ldom1
      primary # ldm list
      NAME     STATE   FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
      primary  active  -n-cv-  SP    16    4G      0.2%  6d 22h 29m
      ldom1    active  -n----  5000  40    8G      0.1%  6h 59m
      primary # ldm set-core 2 ldom1
      primary # ldm list
      NAME     STATE   FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
      primary  active  -n-cv-  SP    16    4G      1.0%  6d 22h 29m
      ldom1    active  -n----  5000  16    8G      0.9%  6h 59m

    Memory pages are vacated by copying their contents to other memory locations and wiping them clean. Solaris may have to swap memory contents to disk if the remaining RAM isn't enough to hold all the contents. For this reason, deallocating memory can take longer on a loaded system. Even on a lightly loaded system it took 7 or 8 seconds to switch the domain below between 8GB and 24GB of RAM.

      primary # ldm set-mem 24g ldom1
      primary # ldm list
      NAME     STATE   FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
      primary  active  -n-cv-  SP    16    4G      0.1%  6d 22h 36m
      ldom1    active  -n----  5000  16    24G     0.2%  7h 6m
      primary # ldm set-mem 8g ldom1
      primary # ldm list
      NAME     STATE   FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
      primary  active  -n-cv-  SP    16    4G      0.7%  6d 22h 37m
      ldom1    active  -n----  5000  16    8G      0.3%  7h 7m

    What if the device is in use? (This is the anecdote that inspired this blog post.)

    If CPU or memory is being removed, releasing it is pretty straightforward, using the method described above. The resources are released, and Solaris continues with less capacity. It's not as simple with a network or I/O device: you don't want to yank a device out from underneath an application that might be using it. In the following example, I've added a virtual network device to ldom1 and want to take it away, even though it's been plumbed.

      primary # ldm rm-vnet vnet19 ldom1
      Guest LDom returned the following reason for failing the operation:

      Resource Information
      ----------------------------------------------------------  -----------------------
      /devices/virtual-devices@100/channel-devices@200/network@1  Network interface net1

      VIO operation failed because device is being used in LDom ldom1
      Failed to remove VNET instance

    That's what I call a helpful error message - telling me exactly what was wrong. In this case the problem is easily solved. I know this NIC is seen in the guest as net1, so:

      ldom1 # ifconfig net1 down unplumb

    Now I can dispose of it, and even the virtual switch I had created for it:

      primary # ldm rm-vnet vnet19 ldom1
      primary # ldm rm-vsw primary-vsw9

    If I had to take the device away disruptively, I could have used ldm rm-vnet -f, but that could disrupt whoever was using it. It's better if that can be avoided.

    Summary

    Oracle VM Server for SPARC provides dynamic reconfiguration, which lets you modify a guest domain's CPU, memory and I/O configuration on the fly, without a reboot. You can add and remove resources as needed, and even automate this for CPUs by setting up resource policies. Taking things away can be more complicated than giving, especially for devices like disks and networks that may contain application and system state or be involved in a transaction. LDoms and Solaris work together cooperatively to coordinate resource allocation and deallocation in a safe and effective way. As a best practice, use dynamic reconfiguration to make the best use of your system's resources.

    Read the article

  • How can I have the passphrase for a private key remembered for a user?

    - by Jon Cram
    I have a collection of web services running on Ubuntu Server 12.04 that pull code from a GitHub repository. These services run under a specific user (let's call that user 'example'). In /home/example/.ssh/id_rsa is the private key associated with the relevant GitHub account. When performing an operation such as git pull I am greeted with: Enter passphrase for key '/home/example/.ssh/id_rsa':. Entering the correct passphrase makes everything work. The same private key is present on local development Ubuntu Desktop 12.04 machines and no passphrase is asked for there. I'd like the passphrase to be remembered, so that after entering it once it is never asked for again; this will help automate various web service updates. I'm guessing the passphrase needs to be stored in the relevant user's keychain so that I don't have to enter it every time the private key needs to be unlocked. How can I achieve this?

    Read the article

  • Do CDNs work with POST operations?

    - by iddqd
    I'm using a CDN (Level3) for the first time and I'm a bit confused. I'm accessing dynamic URLs such as http://cdn.mysite.com?getItem=1234 that return text data. Do CDNs work with HTTP POST operations? When I issue an HTTP POST, my "real" server receives the request every time, so I'm wondering if the CDN has a problem with POST operations. With HTTP GET it seems to work: I call the URL once (from my application) and I can see my server receiving the request; if I call it a second time, the CDN delivers it directly and my server doesn't get anything. However, if I open the same link manually from a second browser tab, my server is asked to deliver again - shouldn't it be cached by now? Many thanks.
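
    One way to investigate from the client side is to compare response headers across repeated requests. A minimal Python sketch using the well-known requests library - the diagnostic header names (Age, X-Cache) are assumptions here, since the exact headers vary by CDN provider:

      import requests

      URL = "http://cdn.mysite.com?getItem=1234"  # example URL from the question

      for attempt in (1, 2):
          resp = requests.get(URL, timeout=10)
          # Many CDNs add diagnostic headers; the names below are assumptions
          # and differ between providers.
          print(attempt, resp.status_code,
                resp.headers.get("Age"),
                resp.headers.get("X-Cache"),
                resp.headers.get("Cache-Control"))

      # Shared caches generally do not cache responses to POST requests,
      # so the origin seeing every POST is expected rather than a CDN fault.
      resp = requests.post(URL, data={"q": "demo"}, timeout=10)
      print("POST", resp.status_code, resp.headers.get("X-Cache"))

    As the last comment hedges: POST responses are generally not cacheable by shared caches, so the behavior described for POST is the normal case, not a Level3-specific problem.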

    Read the article

  • MERGE gives better OUTPUT options

    - by Rob Farley
    MERGE is very cool. There are a ton of useful things about it – mostly around the fact that you can implement a ton of changes against a table all at once. This is great for data warehousing, handling changes made to relational databases by applications, all kinds of things. One of the more subtle things about MERGE is the power of the OUTPUT clause. Useful for logging.

    If you're not familiar with the OUTPUT clause, you really should be – it basically makes your DML (INSERT/DELETE/UPDATE/MERGE) statement return data back to you. This is a great way of returning identity values from INSERT commands (so much better than SCOPE_IDENTITY() or the older (and worse) @@IDENTITY, because you can get lots of rows back). You can even use it to grab default values that are set using non-deterministic functions like NEWID() – things you couldn't normally get back without running another query (or with a trigger, I guess, but that's not pretty).

    That inserted table I referenced – that's part of the 'behind-the-scenes' work that goes on with all DML changes. When you insert data, this internal table called inserted gets populated with rows, and then used to inflict the appropriate inserts on the various structures that store data (HoBTs – the Heaps or B-Trees used to store data as tables and indexes). When deleting, the deleted table gets populated. Updates get a matching row in both tables (although this doesn't mean that an update is a delete followed by an insert, it's just the way it's handled with these tables). These tables can be referenced by the OUTPUT clause, which can show you the before and after for any DML statement. Useful stuff.

    MERGE is slightly different though. With MERGE, you get a mix of entries. Your MERGE statement might be doing some INSERTs, some UPDATEs and some DELETEs. One of the most common examples of MERGE is to perform an UPSERT command, where data is updated if it already exists, or inserted if it's new – and in a single operation too. Here, you can see the usefulness of the deleted and inserted tables, which clearly reflect the type of operation (but then again, MERGE lets you use an extra column called $action to show this). (Don't worry about the fact that I turned on IDENTITY_INSERT, that's just so that I could insert the values.)

    One of the things I love about MERGE is that it feels almost cursor-like – the UPDATE bit feels like "WHERE CURRENT OF …", and the INSERT bit feels like a single-row insert. And it is – but into the inserted and deleted tables. The operations to maintain the HoBTs are still done using the whole set of changes, which is very cool. And $action – very convenient.

    But as cool as $action is, that's not the point of my post. If it were, I hope you'd all be disappointed, as you can't really go near the MERGE statement without learning about it. The subtle thing that I love about MERGE with OUTPUT is that you can hook into more than just inserted and deleted. Did you notice in my earlier query that my source table had a 'src' field that wasn't used in the insert? Normally, this would be somewhat pointless to include in my source query. But with MERGE, I can put that in the OUTPUT clause. This is useful stuff, particularly when you need to audit the changes. Suppose your query involved consolidating data from a number of sources, but you didn't need to insert that into the actual table, just into a table for audit.
This is now very doable, either using the INTO clause of OUTPUT, or surrounding the whole MERGE statement in brackets (parentheses if you’re American) and using a regular INSERT statement. This is also doable if you’re using MERGE to just do INSERTs. In case you hadn’t realised, you can use MERGE in place of an INSERT statement. It’s just like the UPSERT-style statement we’ve just seen, except that we want nothing to match. That’s easy to do, we just use ON 1=2. This is obviously more convoluted than a straight INSERT. And it’s slightly more effort for the database engine too. But, if you want the extra audit capabilities, the ability to hook into the other source columns is definitely useful. Oh, and before people ask if you can also hook into the target table’s columns... Yes, of course. That’s what deleted and inserted give you.
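
    The post's T-SQL examples didn't survive the excerpt, but the upsert-plus-audit semantics are easy to sketch outside the database. A hypothetical Python rendering of what MERGE ... OUTPUT $action produces - purely illustrative of the semantics, not of how SQL Server implements them:

      # Illustrative sketch of MERGE ... OUTPUT $action semantics (not T-SQL):
      # source rows update matching target rows or are inserted, and each
      # change is logged with its action plus a source-only 'src' column.
      target = {1: {"name": "widget"}, 2: {"name": "gadget"}}
      source = [
          {"id": 1, "name": "widget v2", "src": "feed-A"},
          {"id": 3, "name": "gizmo", "src": "feed-B"},
      ]

      audit = []  # plays the role of the table the OUTPUT clause inserts into
      for row in source:
          action = "UPDATE" if row["id"] in target else "INSERT"
          target[row["id"]] = {"name": row["name"]}
          # 'src' is never written to the target, but is still visible to OUTPUT
          audit.append({"$action": action, "id": row["id"], "src": row["src"]})

      print(audit)
      # [{'$action': 'UPDATE', 'id': 1, 'src': 'feed-A'},
      #  {'$action': 'INSERT', 'id': 3, 'src': 'feed-B'}]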

    Read the article

  • Evolution has no access to couchdb

    - by berkes
    Evolution gives the error "Cannot open addressbook": "We were unable to open this addressbook. This either means you have entered an incorrect URI, or the server is unreachable." "Details: Operation not permitted." (Rough translation from Dutch.) Enabling verbose logging in (desktop)couchdb tells me roughly the same:

      [info] [<0.7875.1>] 127.0.0.1 - - 'PUT' /contacts/ 400
      [debug] [<0.7875.1>] httpd 400 error response: {"error":"invalid_consumer","reason":"Invalid consumer (key or signature method)."}

    It seems that Evolution tries to fetch the contacts, couchdb denies access, and Evolution then fails to do a proper OAuth handshake. This is on Ubuntu 10.10, with its default desktopcouch 1.0.1. Any hints on where to start would be most appreciated :)

    Read the article

  • Undeploy multiple SOA composites with WLST or ANT by Danilo Schmiedel

    - by JuergenKress
    As part of our current project the Build Management team asked for a solution to undeploy multiple composites at one time. Of course you have the "Undeploy All from This Partition" menu option in Enterprise Manager, but since we have a lot of deployments every day the guys wanted a scripted solution. It is even more important for the nightly deployments on our continuous integration environment - strangely, we couldn't find anybody who wants to do the undeployment via Enterprise Manager manually every night ;-) However, with WLST or ANT the SOA Suite comes with two options to undeploy composites via script. In this article I'd like to explain both ways.

    Undeployment with WLST
    You can test the steps below on Oracle's pre-built virtual machine for SOA Suite and BPM Suite 11g.
    1. Change to the WLST directory under MIDDLEWARE_HOME/Oracle_SOA1/common/bin: cd /oracle/fmwhome/Oracle_SOA1/common/bin/
    2. Open WLST: ./wlst.sh
    3. Connect to the SOA server (a sketch of the resulting script follows below).

    Read the full article by Danilo Schmiedel.

    SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.
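
    WLST scripts are Jython, so a bulk undeploy is essentially a Python loop around the undeploy command. A minimal sketch, assuming the SOA WLST shell provides connect() and an sca_undeployComposite() command as the article describes - the host names, credentials and exact command signature here are assumptions to verify against your SOA Suite version:

      # undeploy_composites.py - run inside wlst.sh (Jython); names are examples
      server_url = 'http://soahost:8001'   # hypothetical SOA managed server URL
      composites = [('HelloWorld', '1.0'), ('OrderProcessing', '2.1')]

      connect('weblogic', 'welcome1', 't3://soahost:7001')  # admin server
      for name, revision in composites:
          print 'Undeploying %s revision %s' % (name, revision)
          sca_undeployComposite(server_url, name, revision)
      disconnect()

    The nightly job can then generate the composites list and invoke wlst.sh with this script, which is exactly the hands-off run a continuous integration environment needs.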

    Read the article

  • Order independent transparency in particle system

    - by Stepan Zastupov
    I'm writing a particle system and would like to find a trick to achieve proper alpha blending without sorting particles, because: Each particle is a point sprite in a single mesh, so I can't use the scene graph's ability to sort transparent nodes (the system node itself should still be sorted properly, though). Particle position is computed in the shader from initial velocity, acceleration and time; in order to sort the system I would have to perform all those computations on the CPU, which is something I want to avoid. Sorting hundreds of particles against the camera position and uploading the result to the GPU each frame seems to be quite a heavy operation. Alpha testing seems to be fast enough on GLES 2.0 and works fine for non-transparent but "masked" textures; still, it's not enough for semi-transparent particles. How would you handle this?
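
    For a sense of what that avoided per-frame CPU work looks like, here is the back-to-front sort in a hedged NumPy sketch (particle count, layout and camera model are made up for illustration):

      import numpy as np

      # Hypothetical data: N particle positions and a camera position.
      positions = np.random.rand(10_000, 3).astype(np.float32)
      camera = np.array([0.0, 0.0, -5.0], dtype=np.float32)

      # Back-to-front order for alpha blending: farthest particles drawn first.
      # Squared distance is enough for ordering, so the sqrt is skipped.
      dist_sq = ((positions - camera) ** 2).sum(axis=1)
      draw_order = np.argsort(-dist_sq)          # indices, farthest first

      # This reordered buffer would be re-uploaded to the GPU every frame,
      # which is the recurring cost the question is trying to dodge.
      sorted_positions = positions[draw_order]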

    Read the article

  • Two researchers turn Kinect into an assistant for surgeons, thanks to open-source drivers

    Two researchers turn Kinect into an assistant for surgeons, thanks to open-source drivers. Update of 24.12.2010 by Katleen. Yet another new use for Kinect! This one was developed in Switzerland by two researchers at the University of Medicine in Berne. They started from the observation that during an operation, when a surgeon needs information from his patient's file or about the procedure under way, he has to go through a routine that costs him precious time: take off his gloves, walk to the computer, grab the mouse, navigate with it, then put his gloves back on before getting back to work. Yet those precious seconds can...

    Read the article

  • Asynchronous update design/interaction patterns

    - by Andy Waite
    These days many apps support asynchronous updates. For example, if you're looking at a list of widgets and you delete one of them, then rather than wait for the round trip to the server, the app can hide the one you deleted, giving immediate feedback; the actual deletion on the server happens in the background. This can be seen in web apps, desktop apps, iOS apps, etc. But what about when the background operation fails? How should you feed that back to the user? Should you restore the UI to the pre-deletion state? What about when multiple background operations fail together? Does this behaviour/pattern have a name? Perhaps something based on the Command pattern?
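
    The common shape here is an optimistic update with rollback: mutate the UI immediately, keep enough state to undo, and restore it if the background call fails. A minimal Python sketch (asyncio stands in for whatever background mechanism the app uses; the names are illustrative):

      import asyncio

      widgets = ["a", "b", "c"]          # stand-in for the visible list in the UI

      async def delete_on_server(item: str) -> None:
          await asyncio.sleep(0.1)       # simulated round trip
          raise RuntimeError("server rejected delete")   # simulated failure

      async def optimistic_delete(item: str) -> None:
          index = widgets.index(item)
          widgets.remove(item)           # hide immediately: the optimistic part
          try:
              await delete_on_server(item)
          except Exception as err:
              widgets.insert(index, item)                 # roll back to pre-delete state
              print(f"Could not delete {item!r}: {err}")  # surface the failure

      asyncio.run(optimistic_delete("b"))
      print(widgets)                     # ['a', 'b', 'c'] restored after the failure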

    Read the article

  • Is it safe to run upgrade with pinned applications in Synaptic?

    - by BlueXrider
    I have firefox, thunderbird, thunderbird-locale-en, thunderbird-locale-en-us, xul-ext-calendar-timezones, xul-ext-gdata-provider and xul-ext-lightning pinned in Synaptic. When I run apt-get upgrade, I get the following:

      The following packages will be upgraded:
        boot-repair boot-sav darktable firefox libvlc5 libvlccore5 thunderbird
        thunderbird-locale-en thunderbird-locale-en-us vlc vlc-data vlc-nox
        vlc-plugin-notify vlc-plugin-pulse xul-ext-calendar-timezones
        xul-ext-gdata-provider xul-ext-lightning
      17 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
      Need to get 53.0 MB of archives.
      After this operation, 606 kB of additional disk space will be used.
      Do you want to continue [Y/n]?

    Are these packages really going to be upgraded?

    Read the article

  • Topeka Dot Net User Group (DNUG) Meeting &ndash; April 6, 2010

    - by Robz / Fervent Coder
    Topeka DNUG is free for anyone to attend! Mark your calendars now!
    SPEAKER: Troy Tuttle is a self-described pragmatic agilist and Kanban practitioner, with more than a decade of experience delivering software in the finance and health industries and as a consultant. He advocates that teams improve their performance through pursuit of better practices like continuous integration and automated testing. Troy is the founder of the Kansas City Limited WIP Society and is a speaker at local area groups on team-related topics. He currently works as a Project Lead Consultant with AdventureTech Group of Kansas City, KS.
    TOPIC: Why Kanban? Kanban is receiving a large amount of attention recently. What does it offer compared to other approaches? Answering that question may require you to hit the "reset" button on previously held biases and assumptions. Kanban blends Lean thought with ideas from first-generation agile methodologies. To get started with Kanban, we will examine what steps are necessary to establish a transparent, work-limited, pull system. We will highlight the perils of allowing too much work-in-progress and how it affects development performance. Once established, Kanban teams need only a few metrics and tools to monitor their performance and improvement.
    WHERE: Federal Home Loan Bank Topeka on the Security Benefit Campus
    WHEN: 11:30 AM - 1:00 PM on April 6th, 2010
    REGISTER: http://topekadotnet.wufoo.com/forms/topeka-dnug-meeting-attendance/
    ADDITIONAL INFO: As always, please sign in and out of FHLBank to help them with their accountability. Please park in the visitors section at the front of the building when you arrive. If there are no spots in visitors you may park in the overflow lot at the far east end of the facility. Lunch will be provided and we will have some great door prizes!

    Read the article

  • The latest Oracle Social Network News from Open World

    - by me
    Highlights: Oracle and partners showcase the latest developments around Oracle Social Network (OSN); integration of OSN's Social Fabric into business applications like Finance, HCM and Customer Experience; partners like Cisco WebEx, Avaya, Weemo, Lingotek and HarQen showcase OSN integrations; and Oracle shares details about its internal OSN deployment. Please visit us at booth 2413 in the Moscone South exhibition hall and experience a live OSN demo.

    Social Fabric: Oracle Social Network socializes your applications, processes and content within your enterprise. Here are some examples of what is shown at Oracle OpenWorld.

    Socialize the Finance department:
    - Enable finance departments to collaborate instantly during quarter close, with real-time information access
    - Enable finance professionals in the back office to easily interact with the rest of the company
    - Provide privacy when discussing sensitive financial results within Conversations

    Socialize Human Capital Management (HCM):
    - Promote attainable performance goals that achieve the business objectives of the enterprise
    - Capture expertise across the network
    - Provide a continuous feedback loop, resulting in productivity and innovation improvements tied to higher employee engagement

    OSN and Customer Experience:
    - Find the person with the best skills to assist with the issue
    - Collaborate in real time, in the context of the issue
    - Track an agent's collaboration contributions
    - Identify and contribute relevant knowledge back to the system

    Cisco/WebEx integration: the web conferencing tool of your choice can be integrated with OSN. In the example below you can see the integration of the Cisco WebEx solution into OSN - and, sure, this works on mobile devices as well.

    OSN @ Oracle: Oracle has deployed OSN as part of the internal Fusion CRM application rollout. After just 4 months we can see impressive usage patterns.

    Read the article

  • Prevent shutdown when rsnapshot is running

    - by highsciguy
    Since a shutdown during an rsnapshot run will lead to inconsistent/partial backups, I wonder how to delay the system shutdown while rsnapshot is active. The task is complicated by the fact that I need a solution that works for non-expert users, i.e. I need to reliably tell the user that he has to wait until the process is finished (and not do a hard reset); once it is finished, the shutdown should continue. A possible solution could be to replace the action of the window manager's (mostly KDE) shutdown/restart/hibernate buttons with a script that first checks whether rsync is active and shows a message if it is, but I do not know if this is possible in KDE.
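
    The check itself is simple, whatever the KDE wiring ends up being: refuse to continue while rsnapshot (or its rsync children) is alive. A hedged Python sketch of such a guard script - pgrep is standard on Ubuntu, but the dialog integration is left out:

      import subprocess
      import sys

      def backup_running() -> bool:
          # pgrep -x exits 0 if a process with exactly that name exists.
          for name in ("rsnapshot", "rsync"):
              if subprocess.run(["pgrep", "-x", name],
                                stdout=subprocess.DEVNULL).returncode == 0:
                  return True
          return False

      if backup_running():
          # In the KDE case this message would go to a dialog (e.g. via kdialog).
          print("A backup is still running; please wait before shutting down.")
          sys.exit(1)

      # ...otherwise hand off to the real shutdown command here.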

    Read the article

  • Best practice for bulk eCommerce product upload?

    - by Or W
    I'm thinking about opening a large online store for jewelry. The one thing that really bothers me is managing the actual operation of taking pictures, uploading and describing all the products, and I'm trying to figure out the best way to do it in terms of performance and the least time spent. A few things to keep in mind: I'll have over 1,000 items in the online store; I'll have 3-4 pictures for each item (I'm using a DSLR camera, if it makes any difference); I'm probably going to use Magento, unless you have better experience with another eCommerce platform that will help me get this done quickly; and I'll need to randomly(?) create a product code for each item.
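
    The product-code part at least is easy to automate. A hedged Python sketch that generates collision-checked SKUs and writes a CSV a bulk importer could consume - the column names are placeholders, not Magento's actual import schema:

      import csv
      import random
      import string

      def make_sku(prefix: str = "JW") -> str:
          # e.g. "JW-K3F9QZ": prefix plus 6 random characters, collision-checked below.
          chars = string.ascii_uppercase + string.digits
          return prefix + "-" + "".join(random.choices(chars, k=6))

      # Hypothetical catalog standing in for the real 1,000+ items.
      items = [{"name": f"Item {i}", "price": 19.99} for i in range(1, 1001)]

      seen = set()
      with open("products.csv", "w", newline="") as f:
          writer = csv.DictWriter(f, fieldnames=["sku", "name", "price"])  # assumed columns
          writer.writeheader()
          for item in items:
              sku = make_sku()
              while sku in seen:           # regenerate on the rare collision
                  sku = make_sku()
              seen.add(sku)
              writer.writerow({"sku": sku, **item})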

    Read the article

  • Keeping files that are often changed in sync between desktop and laptop

    - by N.N.
    I'm looking for a way to keep a desktop and a laptop in sync. What I want to keep in sync are some folders, mainly ~/Documents, that change often while I'm working on them. If it matters, I can connect to my desktop from anywhere via a URL, but my laptop is harder to access since it might be behind NAT and such. I have been looking at Ubuntu One, but it seems not to go well with working on documents written in LaTeX. If I work on a .tex file in the Ubuntu One directory and compile it (with pdflatex) every now and then (as often as every 10 seconds when working), it creates several new files, including a PDF, which are all uploaded to Ubuntu One; this seems wasteful, since it causes continuous uploads while I work on .tex files. I also usually keep .tex documents version-controlled with git, and then every commit (which can also happen frequently) causes an upload (from changes in ./.git), so again uploads happen continuously while working. Another example is editing images that are saved often. What I think would be best is for sync to happen every tenth minute or at the end of every working session (but there might be some other way to handle this?).

    Read the article

  • What is the basic loadout for an open source web developer?

    - by DeveloperDon
    Thus far, I have mainly been an embedded developer, but I am interested in having the flexibility to do mobile and web development as well. I think my tools should include the following, but probably a lot more:
    - LAMP stack
    - Java IDEs like Eclipse and IntelliJ
    - JS frameworks like Dojo, Node.js, AngularJS (is it better to mix or commit to one?)
    - Cloud solutions like EC2 and Azure (again, OK to mix or better to commit to one?)
    - Google APIs
    - Continuous integration server
    - Source control tools, with Git for new work and SVN, CVS and others for imports
    - FTP server
    - Unit test runners
    - Bug trackers
    - OOAD modeling tools or plug-ins?
    - Graphic design tools?
    - Hosting services
    - XML / JSON / other markup?
    - Content management, SEO?
    I am also interested to know whether there are tools where it might be better to mix and match or support everything available (maybe for source control), and others where the full focus should be on one (maybe Java vs. C# or Windows vs. Linux vs. Mac OS). Perhaps some of these questions need the context of whether the projects will be greenfield (just pick a favorite) or maintenance (no choice; each project continues a legacy, sometimes with poor tools).

    Read the article

  • sbackup: can not mount FTP automatically

    - by ledy
    In the sbackup configuration GUI I set ftp://user:pw@online/storage and it is marked as successfully connected. After the daily backup time I checked the FTP server and it was empty. The error mail says:

      Error in _do_mount: volume doesn't implement mount [ERROR_NOT_SUPPORTED - Operation not supported for the current backend.]
      Unable to mount: volume doesn't implement mount
      File access manager not initialized

    When restarting the sbackup GUI, it is no longer connected to the FTP server and I have to click the button again to connect the remote directory - although it still knows my user/password. How do I save this setting permanently?

    Read the article

  • What framework & technology would you use to make a site like fancy.com [on hold]

    - by adriancdperu
    I'm about to start a 3-month process to build something similar to fancy.com. What are your opinions about the best server-side language + framework? I was thinking about LAMP and CakePHP. What are the important technological issues to consider when developing? MAU assumption: 1 million. Main features:
    - Registration with Facebook
    - Normal registration
    - Search articles
    - Like articles
    - Post articles
    - Comment on articles
    - Suggest articles to friends via mail, Facebook, Twitter and about 3 or 4 more APIs
    - Ranking of articles as a cron job run every minute, with many criteria and many rankings
    - Follow users
    - Data-mining users to mail them every day with articles they have a high probability of liking
    - Operations tools for admins to add articles and close user accounts
    - Mobile focus

    Read the article

  • How to prevent the system from generating log files

    - by shantanu
    My question is a little bit surprising, but I need it. I am using a laptop with a slow processor, and I have found that the HDD has some bad sectors and its response has become slow, although disk health is OK (according to SMART tools). I cannot change my HDD right now, so I have decided to reduce disk operations. How do I prevent the system from generating log files or any other files used to keep history? I know log files are very important, but I don't care about them right now. Please help.

    Read the article

  • Service Layer - how broad should it be, and should it be used also on the local application?

    - by BornToCode
    Background: I need to build a main application with some operations (CRUD and more) in WinForms, and another application that will re-use some of the functions of the main application, in WebForms. I understand that using a service layer is the best approach here; if I understand correctly, the service should call the functions in the BL layer (correct me if I'm wrong). The dilemma: (1) In my main WinForms UI, should I call the functions from the BL or from the service, and why? (2) Should I create a service for every single function in the BL, even if I need some of the functions in only one UI? For example, should I create services for all the CRUD operations even though I only need to re-use the update operation in the WebForm? Your help is much appreciated.
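
    Language aside (the question is about WinForms/C#), the layering being weighed is compact enough to sketch. Here the local UI calls the BL directly while the remote UI goes through a thin service facade - one of the two arrangements the first question asks about, shown for illustration rather than as a recommendation:

      # A sketch of the layering the question is weighing (illustrative names).
      class OrderBL:
          """Business layer: the actual CRUD logic."""
          def update_order(self, order_id: int, data: dict) -> None:
              print(f"updating order {order_id} with {data}")

      class OrderService:
          """Service layer: a thin, remotable facade over the BL.

          Only operations that must be shared (here, update) need exposing;
          nothing forces one service method per BL function."""
          def __init__(self, bl: OrderBL) -> None:
              self._bl = bl

          def update_order(self, order_id: int, data: dict) -> None:
              self._bl.update_order(order_id, data)

      bl = OrderBL()
      bl.update_order(1, {"qty": 2})                 # local UI calling the BL directly
      OrderService(bl).update_order(1, {"qty": 3})   # remote UI going through the service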

    Read the article

  • Unable to download Microsoft Excel files from an IIS SSL site

    - by Jeffrey
    The webmaster at my corporation added SSL to the web site, and now none of my users can download the Microsoft Word and Excel files the site generates. According to Microsoft, the following must be done: web sites that want to allow this type of operation should remove the no-cache header or headers. Typical of MS - they don't tell you what to do, how to do it, or what the best practice is. The webmaster says it's a web.config setting, but all I can find is:

      <configuration>
        <appSettings/>
        <connectionStrings/>
        <system.web>
          <httpRuntime sendCacheControlHeader="false"/>

    and I don't know if this is the best way to achieve the result. I would greatly appreciate some advice on this subject.

    Read the article

  • Fixed Sized Buffer or Variable Buffers with C# Sockets

    - by Keagan Ladds
    I am busy designing a TCP server class in C# that has events and allows the user of the class to define packets the server can send and receive, by registering a class derived from my "GenericPacket" class. My TCPListener uses async methods such as .BeginReceive(..). My issue is that because I am using .BeginReceive(), I need to specify a buffer size when I call the function, which means I can't read the whole packet if one of my defined packets is too big. I have thought of creating a fixed-size header that gets read using .BeginRead(), and then reading the rest using Stream.Read(), but that would make the whole server wait for the operation to complete. I would like to know if anyone has come across this before; I would appreciate any suggestions.
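
    The textbook answer to variable-sized packets over a stream is length-prefix framing: a fixed-size header carrying the payload size, with both header and payload read in a loop. A minimal sketch using Python sockets - the same shape ports to C# by chaining BeginReceive calls until the expected byte count arrives, so nothing ever blocks waiting for a whole packet:

      import struct

      HEADER = struct.Struct("!I")   # 4-byte big-endian payload length

      def send_packet(sock, payload: bytes) -> None:
          sock.sendall(HEADER.pack(len(payload)) + payload)

      def recv_exact(sock, n: int) -> bytes:
          # Loop because recv() may return fewer bytes than requested; the
          # async equivalent keeps issuing receives until n bytes have arrived.
          buf = b""
          while len(buf) < n:
              chunk = sock.recv(n - len(buf))
              if not chunk:
                  raise ConnectionError("peer closed mid-packet")
              buf += chunk
          return buf

      def recv_packet(sock) -> bytes:
          (length,) = HEADER.unpack(recv_exact(sock, HEADER.size))
          return recv_exact(sock, length)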

    Read the article
