Search Results

Search found 7669 results on 307 pages for 'dealing with clients'.

Page 120 of 307

  • GUI app for Bandwidth Shaping (Internet Speed Limiter)

    - by Luis Alvarado
    This is similar to an earlier question, Limit internet bandwidth, but in this case I am looking for a GUI app. The other question is 2+ years old, so a GUI may now be available that lets me not only monitor but also change the maximum speed at which clients can download. A web app or a GUI app would both help. It needs to work on 12.04 or newer. To give an idea, I am looking for something similar to NetLimiter that lets me: see how much download/upload speed an IP/MAC is using (assuming this is a LAN); limit the download/upload speed for an IP/MAC; choose at what times a speed limit applies; and cap traffic once a certain amount has been downloaded/uploaded (like 250 MB per day).
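
    On the command line (outside any GUI), per-client limits of this kind are usually implemented on the gateway with tc and HTB classes. The sketch below is only a minimal illustration of that underlying mechanism, not one of the GUI tools being asked for; the interface name, client IP and rate are placeholder assumptions, and it needs root privileges.

        import subprocess

        def tc(args, ignore_errors=False):
            # Run a single tc command; tc must be run as root.
            result = subprocess.run(["tc"] + args)
            if result.returncode != 0 and not ignore_errors:
                raise RuntimeError("tc failed: " + " ".join(args))

        def cap_client(iface, client_ip, rate="2mbit"):
            """Cap traffic sent toward one LAN client on the given interface."""
            # Root HTB qdisc with a default class; ignore the error if it already exists.
            tc(["qdisc", "add", "dev", iface, "root", "handle", "1:", "htb", "default", "30"],
               ignore_errors=True)
            # Class holding the per-client rate limit.
            tc(["class", "add", "dev", iface, "parent", "1:", "classid", "1:10", "htb", "rate", rate])
            # Steer traffic destined for this client into the limited class.
            tc(["filter", "add", "dev", iface, "protocol", "ip", "parent", "1:0", "prio", "1",
                "u32", "match", "ip", "dst", client_ip + "/32", "flowid", "1:10"])

        if __name__ == "__main__":
            cap_client("eth0", "192.168.1.50", "2mbit")  # placeholder interface and address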

    Read the article

  • Toward servers running low-power ARM chips? Credible, desirable, or unrealistic?

    Toward servers running low-power ARM chips? Credible, desirable, or unrealistic? ARM is known for the low-power chips that equip a good number of mobile devices, from mobile phones to netbooks. But the Cambridge-based consortium appears to have other projects in the works. That, at least, is what its marketing director, Ian Drew, suggests: he has just revealed that a test site (the Linux Internet Platform) has been running on a server built around an ARM chip for about a year. The test reportedly follows several requests from clients particularly interested in the energy savings that...

    Read the article

  • Internet subscription pricing based on actual usage reportedly under consideration: toward the end of unlimited plans?

    France: per-GB pricing for fixed-line internet is a plausible scenario; is the end of unlimited plans near? In the coming months, new legislation around internet billing cannot be ruled out, as is already the case in Germany, where fixed-line plans are already segmented by protocol (P2P, VoIP, etc.) or by overall usage. Since last April, Deutsche Telekom, the main operator in Germany, has in fact already imposed quotas on its new xDSL and fibre customers. Past a certain threshold (75 GB for ADSL subscribers, 200 GB for VDSL, 300/400 GB for FTTH subscribers), subscribers now see their speed drop to...

    Read the article

  • Silverlight 4 + RIA Services - Ready for Business: Ajax Endpoint

    Continuing in our series, I wanted to touch on how a RIA Services service can be exposed in JSON. This is very handy for Ajax clients. The great thing about enabling the JSON endpoint is that it requires NO changes whatsoever to the DomainService. All you need to do to enable it is add the JSON endpoint in web.config:

        <system.serviceModel>
          <domainServices>
            <endpoints>
              <add name="JSON" ...

    Read the article

  • Where to go from here, how to improve / learn more

    - by bExplosion
    I finished university around 4 years ago with a double degree in Software Eng/Comp Sci. I got my first job at a startup in my final year, was with them for 2.5 years, then started my own business. So far everything is going great, lots of clients and steady work etc., but coming straight out of uni and into a startup I never had any form of senior software engineer guiding my work or suggesting improvements. What's the best way for me to improve and learn more? Books? MS exams? Other? I develop in C#, ASP.NET/MVC. Update: The problem isn't really with releasing products; I've released quite a few which are up and running with happy customers. It's more about code quality and best practices: how do I know the code I write is correct? It may work, but there may be ways of coding it much more efficiently or to some kind of standard. Cheers for any responses! Matt

    Read the article

  • Suggest a Windows web host provider for the following requirements

    - by op_amp
    Hi, we have an ASP.NET MVC3-based web app which uses SQL Server 2008 for its database. We also have a client-side desktop application which uses SQL Server 2008. While developing the system, we have been able to sync tables using the SQL Server replication feature. Now we want to host our site on a web server, but we are clueless about it. If any of you have a similar system working, please suggest a cheap but reliable web host which supports replication. Initially there will be approximately 10 or fewer clients who will perform replication 2 or 3 times a day. The size of the database will be less than 4 GB for sure.

    Read the article

  • DMCA in Europe - EUCD?

    - by rlcabral
    There is a site to which I need to send a takedown notice. However, this site is hosted in Europe with an offshore hosting company that states clearly in its TOS that it will not do anything if it receives a DMCA complaint, giving clients the freedom to host whatever copyrighted material they want. Is the EUCD the correct way to deal with this? Where can I find an example of an EUCD complaint, or even a form? The DMCA has all kinds of examples and places to get a sample form, but the EUCD has none.

    Read the article

  • Releasing patches and updates to web service users

    - by Kalidoss.M
    I have written a web service using Java. It's already live (up and running). During development I used SVN (repository) + Jira for task tracking + Maven for building the web service. Now I have a small update for my web service; I have created the task in Jira and committed the files to SVN against the Jira ID after all testing, etc. Say my web service is used by 10 clients, and we did not give them our source code. Is there an established procedure for releasing patches/updates? Is there any way to generate the change log at build time (Maven)? How do I manage the change log for all versions or patch updates during the build, automatically?
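
    One way to get an automatic change log (apart from Maven plugins such as maven-changelog-plugin, which cover similar ground) is to derive it from the SVN history at release time, grouping commit messages by Jira ID. The sketch below is a minimal, hypothetical Python version of that idea; the repository URL, revision range and the "PROJ-123: message" commit convention are all assumptions.

        import re
        import subprocess
        import xml.etree.ElementTree as ET
        from collections import defaultdict

        # Assumed convention: commit messages carry their Jira ID, e.g. "PROJ-42: fix timeout".
        JIRA_ID = re.compile(r"\b([A-Z][A-Z0-9]+-\d+)\b")

        def changelog(repo_url, from_rev, to_rev="HEAD"):
            """Group svn log messages between two revisions by Jira ID."""
            xml_log = subprocess.run(
                ["svn", "log", "--xml", "-r", "{}:{}".format(from_rev, to_rev), repo_url],
                check=True, capture_output=True, text=True,
            ).stdout
            grouped = defaultdict(list)
            for entry in ET.fromstring(xml_log).findall("logentry"):
                msg = (entry.findtext("msg") or "").strip()
                match = JIRA_ID.search(msg)
                grouped[match.group(1) if match else "UNTRACKED"].append(msg)
            return grouped

        if __name__ == "__main__":
            for issue, messages in changelog("https://svn.example.com/repo/trunk", 1200).items():
                print(issue)
                for m in messages:
                    print("  - " + m)

    Hooking a script like this into the Maven build (for example via exec-maven-plugin during the package phase) would regenerate the change log on every release build.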

    Read the article

  • Does anyone have thoughts/experiences on the IT division of Accenture? I just got a job offer from them.

    - by accenturejob
    Hi everyone, this is my first post here. As the title says, I just got a job offer for an entry-level Technology Analyst role at Accenture, which is a very large consulting company. I'm a recent college graduate, and this would be my first "real" job out of school. I'm wondering if any of you have experiences, insights or opinions on Accenture as a company, specifically the Security or IT Strategy divisions of its Technology consulting branch. What do you think of the people there, the management, the clients, etc.? Thanks a lot; hopefully this will help me make a decision.

    Read the article

  • Help! GUI design tool for web & windows applications (2 replies)

    Hi all, we are currently looking to buy a Windows & web GUI design tool. Our company builds software for Windows and web applications with Visual Studio 2008. Our clients will be using this tool to build screen mock-ups, and project delivery should be faster with it. Do you have any suggestions? Of course, a Microsoft Partner product would be preferred... Thanks for your help! Simon Levesque, Sonim6 Inc.

    Read the article

  • Database in the cloud?

    - by Jlouro
    Some of my recent clients are asking for remote connections to the office server, for standalone work, etc., in WinForms applications. Since the web is essentially a remote connection to a server for both data and resources, it should be possible to place both of these in the cloud and have the WinForms apps connect to them the way web apps do. Has anyone tested this? Does it work? Is it fast enough? Is it secure? What is the best cloud host for this type of work?

    Read the article

  • Exciting DBA and BI role in London for fast growing startup

    - by simonsabin
    One of my clients is looking for a DBA and a BI developer. They are a very exciting dotcom company with cutting-edge technology and are growing fast. A bit older than a startup, but they still have that feel about them. They are based in North London and are a very nice company to work for: flexible hours, working from home. Plus they are willing to pay for the right candidate. There is at least 1 DBA and 1 BI role going. If you are interested then let me know http://sqlblogcasts.com/blogs/simons...(read more)

    Read the article

  • Google becomes worth more than Microsoft: a change of era, or is the world's largest software vendor undervalued?

    Google becomes worth more than Microsoft. A change of era, or a sign that the markets are undervaluing the world's largest software vendor? The fact itself is simple: Google now has a larger market capitalization than Microsoft. Its interpretation is far more complex. Some will see a paradigm shift (toward 100% web, subscriptions, content and mobile). Others, a sign that the markets underestimate the strength of Microsoft, whose product portfolio and customer base are much more diversified. The Wall Street Journal, for its part, notes that Google's stock was at a four-year low at the start of the summer. Analysts...

    Read the article

  • 1.5 million Facebook accounts for sale: a Russian hacker delights the phishing networks

    1.5 million Facebook accounts for sale: a Russian hacker delights the phishing networks. "Kirllos", a Russian hacker, has just put 1.5 million Facebook accounts up for sale on an Eastern European forum. And, sale season obliging, he is offering bulk prices at attractive rates: 25 dollars per 1,000 accounts with fewer than 10 friends, 45 dollars per 1,000 with more than 10 contacts. It is likely that the "users" with very few or no contacts were created by him, and that the others were compromised through password theft. The hacker seems to be acting alone, but the specialists looking into the case do not rule out that he may be only an intermediary. As for the clients rushing to...

    Read the article

  • What's the most productive coding environment

    - by Ubiguchi
    I was speaking with an ex-colleague the other day about the most productive way to write code, and he said he found it best "to CIMP, or Code In My Pants". When I asked him exactly what he meant, he explained he found it best to work at home, coding at his own pace, dressed comfortably (in his pants), and communicating with his team through email, IM, or the telephone. Digesting his approach (which he describes to clients as the Complete Integrated Method of Programming), I realised my coding is also more productive in an isolated environment, which made me wonder whether the software industry has got it all wrong: should development really be done by dispersed teams of individuals, or are there advantages to geographical herding that make up for the added interruptions it brings? So has business got it wrong? Should development occur predominantly across geographically isolated individuals to increase productivity, or are there real reasons why herding developers together makes sense?

    Read the article

  • possible to use an IP derived from Dynamic DNS in htaccess IP allow/deny commands?

    - by user115745
    On a website I manage, I want to use an .htaccess file to allow access to a certain administrative directory only from my home IP address, which is dynamically assigned by my ISP and therefore changes -- not often, but it does happen. I also have a DynDNS account, with one of the auto-update clients making sure it always points to my actual home IP address. I don't actually host anything at home; I have just set up the Dynamic DNS account. Is there any way to combine these features? That is, is it possible to write the .htaccess allow/deny directives at my outside web host so that my home IP address is not hard-coded into them, but is instead somehow derived from the domain name that DynDNS has assigned me, by doing a real-time lookup every time the directory's .htaccess file is hit? Thank you.
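
    Apache's Allow from does accept a hostname, but it is checked against the reverse DNS of the connecting client's IP, and an ISP-assigned address will almost never reverse-resolve to a DynDNS name, so that rarely helps here. A more dependable pattern is a small scheduled job on the web host that resolves the dynamic hostname and rewrites the .htaccess file whenever the address changes. The sketch below is a minimal, hypothetical version of that idea; the hostname and file path are placeholders, and the directives use the Apache 2.2 Order/Allow syntax.

        import socket
        from pathlib import Path

        # Placeholders: substitute your DynDNS hostname and the directory's real .htaccess path.
        DYN_HOST = "myhome.dyndns.example.org"
        HTACCESS = Path("/var/www/site/admin/.htaccess")

        TEMPLATE = "Order deny,allow\nDeny from all\nAllow from {ip}\n"

        def refresh():
            """Resolve the dynamic hostname and rewrite .htaccess only when the IP has changed."""
            ip = socket.gethostbyname(DYN_HOST)
            content = TEMPLATE.format(ip=ip)
            if not HTACCESS.exists() or HTACCESS.read_text() != content:
                HTACCESS.write_text(content)

        if __name__ == "__main__":
            refresh()  # run this from cron, e.g. every five minutes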

    Read the article

  • Adobe confirms an attack on its Connectusers.com site, leading to a leak of 150,000 files of user information

    Adobe confirms an attack on its Connectusers.com site, leading to a leak of 150,000 files of user information. Adobe has confirmed that the database of its Connectusers.com forum was compromised, exposing users' passwords. The reaction follows the publication on Pastebin, by a hacker going by the alias "Virus HimA", of information about the forum's users. The hacker stated that he had at his disposal nearly 150,000 files containing personal information on the vendor's customers, staff and partners. To back up his claims, he published files containing email addresses, usernames and other...

    Read the article

  • Microsoft unveils Dynamics AX 2012 R2; the next major evolution of its ERP platform ships December 1

    Microsoft unveils Dynamics AX 2012 R2; the next major evolution of its ERP offering ships December 1. At the Dynamics AX Technical Conference 2012, Microsoft presented Dynamics AX 2012 R2, the next major evolution of its ERP (Enterprise Resource Planning) solution. This release will give customers in 36 countries (including 11 new ones) the new features and tools they need to be more agile and to improve centralized management. Dynamics AX 2012 R2 was built mainly around better support for the capabilities of each industry sector. Manufacturing companies, for instance, will have...

    Read the article

  • In a team practicing Domain Driven Design, should the whole team participate in Stakeholder meetings?

    - by thirdy
    In my experience, in a software development team comprising 1 project manager, 1 tech lead, 1-2 senior devs and 2-3 junior devs (fresh grads), only the tech lead and PM (and/or the senior devs) participate in meetings with clients, domain experts and the client's technical people. I can think of the following potential pitfalls: important information gets lost; human error (the TL/PM might forget to pass information on, under pressure or by plain oversight); non-verbal information is missed (say, a diagram presented by a domain expert); the ubiquitous language is harder to build, since not all team members get to hear the non-dev people; and the potential of creative minds is not fully realized (personally, I am more motivated to think and explore when I am involved in these important meetings). Advantages of this approach: only one point of contact, and less time spent in meetings? Honestly, I am biased and against this approach. I would like to hear your opinions. Is this how you do it in your team? Thanks in advance!

    Read the article

  • Exploring TCP throughput with DTrace (2)

    - by user12820842
    Last time, I described how we can use the overlap in the distributions of unacknowledged byte counts and send window to determine whether the peer's receive window may be too small, limiting throughput. Let's combine that comparison with a comparison of congestion window and slow start threshold, all on a per-port/per-client basis. This will help us:
    - Identify whether the congestion window or the receive window is the limiting factor on throughput, by comparing the distributions of congestion window and send window values to the distribution of outstanding (unacked) bytes. This gives us a visual sense of how often we are thwarted in our attempts to fill the pipe by congestion control versus the peer not being able to receive any more data.
    - Identify whether slow start or congestion avoidance predominates, by comparing the overlap of the congestion window and slow start threshold distributions. If the slow start threshold distribution overlaps with the congestion window, we know that we have switched between slow start and congestion avoidance, possibly multiple times.
    - Identify whether the peer's receive window is too small, by comparing the distribution of outstanding unacked bytes with the send window distribution (i.e. the peer's receive window). I discussed this here.

        # dtrace -s tcp_window.d
        dtrace: script 'tcp_window.d' matched 10 probes
        ^C

          cwnd 80 10.175.96.92
                   value  ------------- Distribution ------------- count
                    1024 |                                         0
                    2048 |                                         4
                    4096 |                                         6
                    8192 |                                         18
                   16384 |                                         36
                   32768 |@                                        79
                   65536 |@                                        155
                  131072 |@                                        199
                  262144 |@@@                                      400
                  524288 |@@@@@@                                   798
                 1048576 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@             3848
                 2097152 |                                         0

          ssthresh 80 10.175.96.92
                   value  ------------- Distribution ------------- count
               268435456 |                                         0
               536870912 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 5543
              1073741824 |                                         0

          unacked 80 10.175.96.92
                   value  ------------- Distribution ------------- count
                      -1 |                                         0
                       0 |                                         1
                       1 |                                         0
                       2 |                                         0
                       4 |                                         0
                       8 |                                         0
                      16 |                                         0
                      32 |                                         0
                      64 |                                         0
                     128 |                                         0
                     256 |                                         3
                     512 |                                         0
                    1024 |                                         0
                    2048 |                                         4
                    4096 |                                         9
                    8192 |                                         21
                   16384 |                                         36
                   32768 |@                                        78
                   65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@  5391
                  131072 |                                         0

          swnd 80 10.175.96.92
                   value  ------------- Distribution ------------- count
                   32768 |                                         0
                   65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 5543
                  131072 |                                         0

    Here we are observing a large file transfer via http on the webserver. Comparing these distributions, we can observe:
    - Slow start congestion control is in operation. The distribution of congestion window values lies below the range of slow start threshold values (which are in the 536870912+ range), so the connection is in slow start mode.
    - Both the unacked byte count and the send window values peak in the 65536-131071 range, but the send window value distribution is narrower. This tells us that the peer TCP's receive window is not closing.
    - The congestion window distribution peaks in the 1048576-2097152 range while the receive window distribution is confined to the 65536-131071 range. Since the cwnd distribution ranges as low as 2048-4095, we can see that for some of the time we have been observing the connection, congestion control has been a limiting factor on transfer, but for the majority of the time the receive window of the peer would more likely have been the limiting factor. However, we know the window has never closed, as the distribution of swnd values stays within the 65536-131071 range.

    So all in all we have a connection that has been mildly constrained by congestion control, but for the bulk of the time we have been observing it, neither congestion nor the peer receive window has limited throughput.

    Here's the script:

        #!/usr/sbin/dtrace -s

        tcp:::send
        / (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
        {
                @cwnd["cwnd", args[4]->tcp_sport, args[2]->ip_daddr] =
                    quantize(args[3]->tcps_cwnd);
                @ssthresh["ssthresh", args[4]->tcp_sport, args[2]->ip_daddr] =
                    quantize(args[3]->tcps_cwnd_ssthresh);
                @unacked["unacked", args[4]->tcp_sport, args[2]->ip_daddr] =
                    quantize(args[3]->tcps_snxt - args[3]->tcps_suna);
                @swnd["swnd", args[4]->tcp_sport, args[2]->ip_daddr] =
                    quantize((args[4]->tcp_window) * (1 << args[3]->tcps_snd_ws));
        }

    One surprise here is that slow start is still in operation - one would assume that for a large file transfer, acknowledgements would push the congestion window up past the slow start threshold over time. The slow start threshold is in fact still close to its initial (very high) value, so that would suggest we have not experienced any congestion (the slow start threshold is adjusted when congestion occurs). Also, the above measurements were taken early in the connection's lifetime, so the congestion window did not get a chance to be bumped up to the level of the slow start threshold. A good strategy when examining these sorts of measurements for a given service (such as a webserver) would be to start by examining the distributions above aggregated by port number only, to get an overall feel for service performance, i.e. is congestion control or peer receive window size an issue, or are we unconstrained to fill the pipe? From there, the overlap of distributions will tell us whether to drill down into specific clients. For example, if the send window distribution has multiple peaks, we may want to examine whether particular clients show issues with their receive window.

    Read the article

  • Access Token Verification

    - by DecafCoder
    I have spent quite a few days reading up on OAuth and token-based security measures for REST APIs, and I am currently looking at implementing an OAuth-based authentication approach almost exactly like the one described in this post (OAuth alternative for a 2 party system). From what I understand, the token is to be verified upon each request to the resource server. This means the resource server would need to retrieve the token from a datastore to verify the client's token. Given that this would have to happen on every request, I am concerned about the speed implications of hitting a datastore like MySQL or NoSQL on every request just to verify the token. Is this the standard way to verify tokens, by having them stored in an RDBMS or NoSQL database and retrieved upon each request? Or is it a suitable solution to have them cached (bearing in mind that we are talking millions of users)?
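
    As a rough illustration of the caching option, here is a minimal sketch of verifying tokens against a datastore with a short-lived cache in front of it. The names, TTL and in-process dict are assumptions; a real deployment would more likely use a shared cache such as Redis or memcached, and the trade-off is that a revoked token can keep working for up to the cache TTL.

        import time

        class TokenVerifier:
            """Verify access tokens, caching successful datastore lookups for a short TTL."""

            def __init__(self, lookup_token, ttl_seconds=60):
                # lookup_token(token) returns a user id, or None if the token is unknown/revoked.
                self._lookup_token = lookup_token
                self._ttl = ttl_seconds
                self._cache = {}  # token -> (user_id, expiry timestamp)

            def verify(self, token):
                now = time.time()
                hit = self._cache.get(token)
                if hit is not None and hit[1] > now:
                    return hit[0]                        # served from cache, no datastore round trip
                user_id = self._lookup_token(token)      # falls through to the MySQL/NoSQL lookup
                if user_id is not None:
                    self._cache[token] = (user_id, now + self._ttl)
                return user_id

        # Example usage with a stand-in datastore:
        fake_db = {"abc123": "user-42"}
        verifier = TokenVerifier(lookup_token=fake_db.get, ttl_seconds=60)
        print(verifier.verify("abc123"))  # first call hits the "datastore"
        print(verifier.verify("abc123"))  # second call is answered from the cache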

    Read the article

  • Are executives unable to unwind on holiday? According to Roambi and Zebaz, managers check their work data everywhere, all the time

    France: executives unable to unwind on holiday. According to Roambi and Zebaz, managers check their work data everywhere, all the time. With smartphones and tablets becoming commonplace in the enterprise, Roambi, a specialist in data visualization on iPhone and iPad, and Zebaz, a publisher of SaaS B2B business database solutions, surveyed French decision-makers to learn about their habits around mobility and access to information. The results of the survey were put into perspective against a study Roambi conducted among its US clients. The main lesson: "French executives, increasingly nomadic, have become...

    Read the article

  • Display large amount of data to client through pagination

    - by ebram tharwat
    I have a web application in which I need to show a large number of records to clients. I will use pagination, but I was wondering: should I load all the data at once, so that pagination, sorting and searching are easy, even though it takes a long time (against a local DB it takes up to 9 seconds)? Or should I, each time I show a new page, make a new request to the server and then a new query against the DB to get the next records? But then, what if the client clicks the Prev button? I would be making a new request for data I had already loaded. Should I cache data that has been loaded before, and how, if that is a good technique? So: load all data once, or make a new request every time I need data that may already have been loaded? I'm using an ASP.NET MVC SPA with durandaljs and knockoutjs.
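
    A common middle ground is to fetch one page at a time and keep a small client-side cache keyed by page number, so Prev/Next over recently visited pages never goes back to the server. The sketch below is a minimal illustration of that idea in Python (in a durandal/knockout SPA the same pattern would live in JavaScript); the fetch_page callback, page size and cache size are assumptions.

        from collections import OrderedDict

        class PageCache:
            """Cache recently viewed pages so revisiting them skips the server round trip."""

            def __init__(self, fetch_page, page_size=50, max_pages=20):
                self._fetch_page = fetch_page     # fetch_page(offset, limit) -> list of records
                self._page_size = page_size
                self._max_pages = max_pages
                self._pages = OrderedDict()       # page number -> records, kept in LRU order

            def get(self, page_number):
                if page_number in self._pages:
                    self._pages.move_to_end(page_number)       # mark as recently used
                    return self._pages[page_number]
                records = self._fetch_page(page_number * self._page_size, self._page_size)
                self._pages[page_number] = records
                if len(self._pages) > self._max_pages:
                    self._pages.popitem(last=False)            # evict the least recently used page
                return records

        # Example usage with a stand-in data source of 1000 rows:
        data = list(range(1000))
        cache = PageCache(lambda offset, limit: data[offset:offset + limit])
        cache.get(0)   # server/DB hit
        cache.get(1)   # server/DB hit
        cache.get(0)   # served from the cache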

    Read the article

  • Should I keep separate client codebases and databases for a software-as-a-service application?

    - by John
    My question is about the architecture of my application. I have a Rails application where companies can administer everything related to their clients. Companies buy a subscription and their users access the application online. Hopefully I will get multiple companies subscribing to my application/service. What should I do with my code and database? (1) A separate app codebase and database per company; (2) one app codebase but a separate database per company; or (3) one app codebase and one database. The decision involves security (e.g. a user from company X should not see any data from company Y), performance (suppose it becomes successful; it should still perform well) and scalability (again, if successful, it should perform well but also be easy for me to handle all the companies, code changes, etc.). For the sake of maintainability I tend to opt for one codebase, but for the database I really don't know. What do you think is the best option?
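
    For the single-codebase, single-database option, the usual safeguard is row-level tenancy: every table carries a company_id and every query is scoped to the current company (in Rails this is typically done with default scopes or a multi-tenancy gem). The sketch below illustrates the bare idea in Python with SQLite rather than Rails; the table and column names are assumptions.

        import sqlite3

        # Every row is tagged with company_id, and every query filters on it,
        # so company X can never read company Y's data.
        conn = sqlite3.connect(":memory:")
        conn.execute(
            "CREATE TABLE clients (id INTEGER PRIMARY KEY, company_id INTEGER NOT NULL, name TEXT)"
        )
        conn.executemany(
            "INSERT INTO clients (company_id, name) VALUES (?, ?)",
            [(1, "Acme contact"), (1, "Acme lead"), (2, "Globex contact")],
        )

        def clients_for_company(company_id):
            """All data access goes through helpers that force the tenant filter."""
            return conn.execute(
                "SELECT id, name FROM clients WHERE company_id = ?", (company_id,)
            ).fetchall()

        print(clients_for_company(1))  # only company 1's rows
        print(clients_for_company(2))  # only company 2's rows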

    Read the article

  • How do you balance between "do it right" and "do it ASAP" in your daily work?

    - by Flot2011
    I find myself pondering this question time and time again. I want to do things the right way, to write clean, understandable, correct code that is easy to maintain, but what I actually do, pretty often, is write patch upon patch, just because there is no time, clients are waiting, a bug should be fixed overnight, the company is losing money on this problem, a manager is pressing hard, etc. I know perfectly well that in the long run I am wasting much more time on these patches, but as that time is spread over months of work, nobody cares. Also, as one of my managers used to say, we don't know if there will be a long run if we don't fix it now. I am sure I am not the only one trapped in this endless real-versus-ideal choice. So how do you, fellow programmers, cope with this?

    Read the article
