Search Results

Search found 29638 results on 1186 pages for 'phone number'.


  • When decomposing a large function, how can I avoid the complexity from the extra subfunctions?

    - by missingno
    Say I have a large function like the following:

        function do_lots_of_stuff(){
            { //subpart 1 ... }
            ...
            { //subpart N ... }
        }

    A common pattern is to decompose it into subfunctions:

        function do_lots_of_stuff(){
            subpart_1(...)
            subpart_2(...)
            ...
            subpart_N(...)
        }

    I usually find that decomposition has two main advantages:

    1. The decomposed function becomes much smaller. This can help people read it without getting lost in the details.
    2. Parameters have to be explicitly passed to the underlying subfunctions, instead of being implicitly available by just being in scope. This can help readability and modularity in some situations.

    However, I also find that decomposition has some disadvantages:

    1. There are no guarantees that the subfunctions "belong" to do_lots_of_stuff, so there is nothing stopping someone from accidentally calling them from the wrong place.
    2. A module's complexity grows quadratically with the number of functions we add to it (there are more possible ways for things to call each other).

    Therefore: are there useful conventions or coding styles that help me balance the pros and cons of function decomposition, or should I just use an editor with code folding and call it a day?

    EDIT: This problem also applies to functional code (although in a less pressing manner). For example, in a functional setting the subparts would return values that are combined at the end, and the decomposition problem of having lots of subfunctions able to use each other is still present. We can't always assume that the problem domain can be modeled with just a few small, simple types and a handful of highly orthogonal functions. There will always be complicated algorithms or long lists of business rules that we still want to be able to deal with correctly.

        function do_lots_of_stuff(){
            p1 = subpart_1()
            p2 = subpart_2()
            pN = subpart_N()
            return assembleStuff(p1, p2, ..., pN)
        }
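    One convention that addresses the "belongs to" concern is to give the subfunctions the narrowest visibility the language offers, so nothing outside the parent function's module can call them. A minimal sketch in TypeScript (the question's pseudocode is language-neutral; the names here are illustrative, not from the question):

        // Only the assembling function is exported; the subparts are
        // module-private, so other files cannot import or call them.
        function subpart1(input: number[]): number {
            return input.reduce((sum, n) => sum + n, 0);
        }

        function subpart2(total: number): string {
            return `total = ${total}`;
        }

        export function doLotsOfStuff(input: number[]): string {
            const p1 = subpart1(input);   // parameters passed explicitly, as noted above
            return subpart2(p1);
        }

    The same idea appears elsewhere as static file-scope functions, private methods, or helpers nested inside the parent function, all of which also keep the module's "things that can call each other" count from growing.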

    Read the article

  • Is it possible to route *.example.com to a single machine without registering extra domains?

    - by oligofren
    I would like to achieve something similar to what wordpress.com does - giving each user their own subdomain. user1.wordpress.com would, in the VirtualHost setup of Apache, have its DocRoot at /user/user1, for instance. Now, our hosting service provider charges a fee for creating each domain, and in our case this would mean a ridiculous number of domains with a matching price. After some googling on DNS I came across a description of a DNAME record. That seems to fit the bill precisely. Is there any reason why my service provider would not do this, or why I should not do this?
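    For reference, the usual way this is done without registering per-user domains is a single wildcard DNS record combined with one catch-all Apache virtual host. A sketch, assuming Apache with mod_vhost_alias enabled and control over the zone file (the IP address and paths are placeholders):

        ; zone file: one wildcard record covers every subdomain
        *.example.com.   IN  A   203.0.113.10

        # Apache: map user1.example.com to /user/user1
        <VirtualHost *:80>
            ServerAlias *.example.com
            VirtualDocumentRoot /user/%1
        </VirtualHost>

    Whether a DNAME record is the right tool depends on the provider; a plain wildcard A (or CNAME) record already achieves the single-machine routing described above.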

    Read the article

  • Solutions for iOS collaborative sync (iCloud CoreData, CouchDB)?

    - by mluisbrown
    I'm developing an iOS app where one of the features will be allowing users to share and collaborate on data (e.g. lists). From everything I've read, and based on the way that iCloud Core Data sync works, I assume it would not be a good fit for the following reasons, but I wanted to make sure I wasn't missing anything, as I'd prefer not to use a 3rd party syncing solution if at all possible:

    iCloud sync of any kind (Core Data, Document or Key / Value pairs) can only ever be between devices that use the same iCloud account, so it's designed for a single user syncing data over multiple devices. Any kind of collaborative sync (several people editing the same document or list simultaneously) would be limited to everyone having the same iCloud account, and sharing an iCloud account is usually limited to, for example, husband and wife or similarly close relationships involving a small number of people.

    iCloud Core Data sync is for ensuring that each synced device has the same data. It doesn't seem to allow syncing just a subset of the data, so scenarios in which each user has their own documents and is only sharing / collaborating on a subset of them are not supported.

    And I'm not even mentioning the well-documented problems with iCloud Core Data syncing, which may or may not have been resolved with iOS 7.

    Given the above, it would seem that CouchDB (with TouchDB) would be a better option, as it seems to support everything I need. What other options are there that people can recommend?

    Read the article

  • Deploying InfoPath forms – idiosyncrasies

    - by PointsToShare
    Well, I have written a sophisticated PowerShell script to expedite the deployment of InfoPath forms (.XSN files). Along the way, by way of trial and error (mostly error and error), I discovered a few little things. Here they are.

    • Regardless of how the install command is run – PowerShell or the GUI in Central Admin – SharePoint wraps the XSN inside a solution (a WSP), then installs and deploys the solution.
    • The solution is named by concatenating "form-" with the first 16 characters of the file name (or fewer, if the file name is shorter than 16) and the required .wsp at the end. So if the form name was MyInfopathForm.xsn, the solution name will be form-MyInfopathForm.wsp, but for WithdrawalOfRequestsForRefund.xsn it will be named form-WithdrawalOfRequ.wsp.
    • It only gets worse! If there is already a solution file with the same name, Microsoft appends a three-digit number to the name, like MyInfopathForm-123.wsp. Remember, a digit is a finger - I suspect a middle finger - so when you deploy the same form many times (many versions of it, or, as it was in my case, testing a script time and again), you'll end up with many such digit (middle finger) appended solutions, all un-deployed except the last one. This is not a bug. It's a feature!

    Well, there are ways around it. When working by hand, remove the solution from the solution store before deploying the form again. In the script I do the same thing. And finally, an important caveat: make sure that all your form names are unique in the first 16 characters. If you also have a form with the name WithdrawalOfRequestForRelief.xsn, you're in trouble! That's all folks!

    Read the article

  • No input file specified with nginx

    - by user66700
    I'm getting "No input file specified." when I attempt to browse to the phpmyadmin domain, not sure what I'm doing wrong.. using both php-fpm and php-cgi, php-fpm is currently working another directory fine..Had to change the port number to 8888 since -fpm was already using 9000 http://pastebin.com/kdEckiL3 from nginx.conf: server { listen 80; server_name phpmyadmin.domain.com; access_log /home/fanboy/logs/phpmyadmin.access_log; error_log /home/fanboy/logs/phpmyadmin.error_log; location / { root /usr/share/phpmyadmin; index index.php; } location ~ \.php$ { fastcgi_pass 127.0.0.1:8888; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin$fastcgi_script_name; include /usr/local/nginx/conf/fastcgi.conf; } }

    Read the article

  • Latest XP Drivers for Intel Integrated Graphics

    - by John
    On the Intel site I'm struggling to find what the latest driver version is for certain chipsets. From a system info tool, I have several PCs (laptops) with the following reported graphics:

    Mobile Intel(R) 4 Series Express Chipset Family
    Intel(R) Q35 Express Chipset Family
    Intel(R) Q45/Q43 Express Chipset

    All use the same driver, igxprd32.dll, at version 6.14.0010.4xxx (the last 3 digits vary). All PCs are XP 32-bit. I am not certain, but I think these are all using the same basic chipset. More than likely the drivers were never updated, so I wondered what version might be relevant. Any help on tracking down the latest driver version (I just need the number, so I can see how out of date they are) and figuring out which chips these cards are would be great.

    Read the article

  • Empty Recycle Bin error "Cannot Delete Dc12: Access denied."

    - by Chris Noe
    The Dc number can vary. The error is sporadic, but when it happens it prevents the contents of the recycle bin from being deleted. It can also occur when the recycle bin appears to be empty, yet it has the crumpled-paper indicator. Rebooting makes the problem go away, but it can also magically go away by just waiting a long time, like overnight. But the problem keeps recurring with no rhyme or reason. What is causing this? I really don't want to reinstall Windows.

    Read the article

  • Unable to delete files in Temporary Internet Files folder

    - by Johnny
    I'm on Win7. I have a large number of large .bin files, totaling 183GB, in my Temporary Internet Files folder. They all seem to come from video sharing sites like YouTube. The files are invisible in Explorer even after allowing viewing of hidden files. The only way I can see them is by issuing "dir /fs" on the command line. Now when I try to delete them from the command line, nothing happens. Trying to delete the whole folder from Explorer results in access denied because another process is using a file in the folder (IE is not running while I'm doing this). Trying to clear the folder using IE is also unsuccessful. How do I delete these files? How did they end up there without being deleted by IE?

    Read the article

  • Boundary conditions for testing

    - by Loggie
    OK, so in a programming test I was given the following question.

    Question 1 (1 mark)
    Spot the potential bug in this section of code:

        void Class::Update( float dt )
        {
            totalTime += dt;
            if( totalTime == 3.0f )
            {
                // Do state change
                m_State++;
            }
        }

    The multiple choice answers for this question were:

    a) It has a constant floating point number where it should have a named constant variable
    b) It may not change state with only an equality test
    c) You don't know what state you are changing to
    d) The class is named poorly

    I wrongly answered this with answer C. I eventually received feedback on the answers, and the feedback for this question was:

        Correct answer is a. This is about understanding correct boundary conditions for tests. The other answers are arguably valid points, but do not indicate a potential bug in the code.

    My question here is, what does this have to do with boundary conditions? My understanding of boundary conditions is checking that a value is within a certain range, which isn't the case here. Upon looking over the question, in my opinion, B should be the correct answer when considering the accuracy issues of using floating point values.
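    To make the concern in the last paragraph concrete, here is a minimal, compilable C++ sketch (assuming dt is an ordinary frame time; this is not the test's code) showing how an accumulated float can step over 3.0f without ever comparing equal, whereas a boundary test fires reliably:

        #include <cstdio>

        int main() {
            float totalTime = 0.0f;
            bool equalityFired = false;
            bool boundaryFired = false;
            for (int frame = 0; frame < 200; ++frame) {
                const float dt = 0.016f;                       // ~60 fps frame time
                totalTime += dt;
                if (totalTime == 3.0f) equalityFired = true;   // almost never true
                if (totalTime >= 3.0f) boundaryFired = true;   // fires once 3.0 is passed
            }
            std::printf("equality: %d, boundary: %d, totalTime: %f\n",
                        equalityFired, boundaryFired, totalTime);
            return 0;
        }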

    Read the article

  • Can't use HTTPS with ServerXMLHTTP object

    - by Imraan
    I am supporting a Classic ASP application that connects to a payment gateway via HTTPS. Up until recently there have been no issues. A few days ago this broke without the code, IIS config or anything local changing. It's broken on at least 3 separate servers. The last run of Windows Updates was in late November, but bringing the servers' updates up to date has not resolved the problem. A code snippet is below.

        Dim oHttp
        Dim strResult

        Set oHttp = CreateObject("MSXML2.ServerXMLHTTP")
        oHttp.setOption 2, 13056
        oHttp.open "POST", SOAP_ENDPOINT, false
        oHttp.setRequestHeader "Content-Type", "application/soap+xml; charset=utf-8"
        oHttp.setRequestHeader "SOAPAction", SOAP_NS + "/" & SOAP_FUNCTION
        oHttp.send SOAP_REQUEST

    Below is a dump of the error object:

        Number: -2147012852
        Description: A certificate is required to complete client authentication
        Message: A certificate is required to complete client authentication

    I initially posted the question on Stack Overflow (http://stackoverflow.com/questions/9212985/cant-use-https-with-serverxmlhttp-object) thinking it was a code issue, but further investigation seems to point to a server issue.

    Read the article

  • Why isn't one of the constant buffers being loaded inside the shader?

    - by Paul Ske
    I did, however, get the model to load under tessellation; the only problem is that one of the constant buffers isn't actually updating the shader's tessellation factor inside the hull shader. I created a message box at the rendering point, so I know for sure the tessellation factor is assigned to the dynamic constant buffer. Inside the shader code, where it says .Edges[1] = tessellationAmount;, the tessellationAmount is supposed to be sent from the dynamic buffer to the shader. Otherwise it's just a plain box.

    To explain better: there's a matrixBuffer, a cameraBuffer and a TessellationBuffer for constants, and a multiBuffer array that holds the matrix, camera and tessellation buffers. So, when I set the hull shader, pixel shader, vertex shader and domain shader, each gets its constants from the multibuffer, e.g. devcon->HSSetConstantBuffers(0, 3, multibuffer);

    The only way around the whole thing would be to go into the shader and hard-code how much the edges tessellate (and the inside as well, with the same number). My question is: why doesn't the TessellationBuffer work in the shader?
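    Two things are worth ruling out here, sketched below in plain D3D11 C++ (illustrative names, not the asker's actual code, and it assumes the buffer was created with D3D11_USAGE_DYNAMIC and CPU write access): the constant buffer's size must be a multiple of 16 bytes or CreateBuffer would already have failed, and the slot passed to HSSetConstantBuffers must match the register declared in the hull shader.

        #include <d3d11.h>

        struct TessellationBufferType       // HLSL side: cbuffer ... : register(b2)
        {
            float tessellationAmount;
            float padding[3];               // pads the struct to 16 bytes
        };

        void UpdateTessellation(ID3D11DeviceContext* devcon,
                                ID3D11Buffer* tessBuffer,
                                float tessellationAmount)
        {
            D3D11_MAPPED_SUBRESOURCE mapped = {};
            if (SUCCEEDED(devcon->Map(tessBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
            {
                TessellationBufferType* data =
                    static_cast<TessellationBufferType*>(mapped.pData);
                data->tessellationAmount = tessellationAmount;
                devcon->Unmap(tessBuffer, 0);
            }
            // The start slot (2 here) must line up with the register in the shader;
            // with HSSetConstantBuffers(0, 3, multibuffer) the third array element
            // is the one that lands in b2.
            devcon->HSSetConstantBuffers(2, 1, &tessBuffer);
        }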

    Read the article

  • Does Dreamweaver subversion support branching?

    - by John Isaacks
    Adobe Dreamweaver added support for Subversion in CS4. I have CS5. I am able to update and commit, and they even have a very easy rollback option where you can promote any revision number to "head", but I cannot figure out any way to branch using it. According to Adobe, their Subversion support is not full-featured, but I cannot find any resource that states exactly what is supported. So is branching one of the things not supported? (I kind of feel like, what's the point without branching?) If you can do it, how?
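    Since branching in Subversion is just a server-side copy, one workaround (a sketch, assuming a conventional trunk/branches layout and a command-line SVN client alongside Dreamweaver) is to create the branch outside Dreamweaver and then point the site definition at the branch URL:

        svn copy http://svn.example.com/repo/trunk \
                 http://svn.example.com/repo/branches/my-feature \
                 -m "Create my-feature branch"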

    Read the article

  • Xen find VBD id for physical disks

    - by Joe
    I'm starting a Xen domU using xm create config.cfg. Within the config file are a number of physical block devices (LVs) which are added to the guest and can be accessed fine when it boots. However, at some point in the future I need to be able to hot-unplug one of these disks using the xm block-detach command. This command, however, requires the VBD id of the device to be detached, and I can't find a way to get the device id for a particular disk 'plugged in' at startup. Any help is much appreciated!
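    One way to recover the id after boot (a sketch; the domain name is a placeholder) is to ask xm for the domU's attached block devices, since it lists them together with their virtual device numbers:

        # the Vdev column in the output is the device id that xm block-detach expects
        xm block-list mydomu
        xm block-detach mydomu <Vdev>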

    Read the article

  • Massive 404 attack with non existent URLs. How to prevent this?

    - by tattvamasi
    The problem is a whole load of 404 errors, as reported by Google Webmaster Tools, with pages and queries that have never been there. One of them is viewtopic.php, and I've also noticed a scary number of attempts to check whether the site is a WordPress site (wp_admin) and for the cPanel login. I block TRACE already, and the server is equipped with some defense against scanning/hacking. However, this doesn't seem to stop. The referrer is, according to Google Webmaster, totally.me.

    I have looked for a solution to stop this, because it certainly isn't good for the poor real actual users, let alone the SEO concerns. I am using the Perishable Press mini black list (found here), a standard referrer blocker (for porn, herbal, casino sites), and even some software to protect the site (XSS blocking, SQL injection, etc). The server is using other measures as well, so one would assume that the site is safe (hopefully), but it isn't ending.

    Does anybody else have the same problem, or am I the only one seeing this? Is it what I think, i.e., some sort of attack? Is there a way to fix it, or better, prevent this useless resource waste?

    EDIT: I've never used the question itself to thank for the answers, and hope this can be done. Thank you all for your insightful replies, which helped me to find my way out of this. I have followed everyone's suggestions and implemented the following:

    • a honeypot
    • a script that listens for suspect URLs in the 404 page and sends me an email with the user agent/IP, while returning a standard 404 header
    • a script that rewards legitimate users, in the same custom 404 page, in case they end up clicking on one of those URLs.

    In less than 24 hours I have been able to isolate some suspect IPs, all listed in Spamhaus. All the IPs logged so far belong to spam VPS hosting companies. Thank you all again; I would have accepted all answers if I could.

    Read the article

  • Algorithms for Data Redundancy and Failover for distributed storage system?

    - by kennetham
    I'm building a distributed storage system that works with different storage sizes. For instance, my storage devices have sizes of 50GB, 70GB, 150GB, 250GB and 1000GB - 5 storage devices in one system. My application will store any files to the storage system.

    Question: how can I build distributed storage with data redundancy and fail-over to store documents, videos and any type of file, while ensuring that should any one of the storage devices fail, there would still be another copy of those files on another storage device? The concern is that 50GB of storage can only hold a certain maximum number of files compared to 70GB, 150GB, etc. With the 5 storage systems brought together like one cloud storage, is there any logical way to distribute or store the files through my application? How do I ensure data redundancy across different storage sizes? Is there any algorithm to collate multiple blob files into a single file archive? What is the best solution for one cloud storage built from multiple different storage sizes?

    I open this topic with the objective of discussing the best way to implement this idea, assuming simplicity: what are the issues of this implementation, performance measurements, and a discussion of the limitations.

    Read the article

  • MySQL 5.0 Unavailable but still working every monday morning

    - by user1578031
    So I have a MySQL server with 4-5 databases, and every Monday morning I can't log in with phpMyAdmin or the command-line tools or options, and the funny thing is the connection can be made later in the day, around 3:30-4 PM. I can't replicate the issue on any dev boxes and can't upgrade to 5.5 for a number of reasons. I want to try to poll the SQL databases with a simple script that just connects and then disconnects. I'm not very good with SQL, but I would like to know if anyone could help with a script which I can automate to run every 30 minutes or so on Sunday, to see if the issue can be stopped by the database being connected to over the very quiet period of Sunday. After this I'm out of ideas and have to wait for the HW/SW refresh to get 5.5 on there...
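    A very small version of that polling idea (a sketch; the host, user and password are placeholders) is a cron entry that opens a connection, runs a trivial query and logs the outcome every 30 minutes on Sundays:

        # m    h  dom mon dow  command
        */30   *  *   *   0    mysql -h 127.0.0.1 -u monitor -pSECRET -e 'SELECT 1' >> /var/log/mysql-poll.log 2>&1

    If the Monday failures stop while this runs, the problem is likely something that only bites an idle server; if they continue, the log at least records when connectivity was lost.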

    Read the article

  • Looking for simple windows scan (multiple pages) to one pdf application?

    - by Troggy
    I would like to find some simple scanning software for a Windows machine that can scan to PDF, and I would like it to do batch or multiple pages into one big PDF. I saw a couple of questions on scan-to-PDF software, but did not see anything about scanning to large, multiple-page PDFs.

    EDIT: I am surprised there are not more options out there. Do many of the scanners/all-in-one devices come with included software that performs this function?

    EDIT 2: I tried Scan2PDF and it locked up on me multiple times in the middle of the scan job and then gave me non-English error messages. Otherwise, I liked how simple the app was - just select the number of pages and hit OK. Any other success stories out there?

    Read the article

  • Synckolab will not start automatically

    - by EBV2010
    We have a Kolab server for e-mail, calendar and contacts, with Thunderbird as the client. The add-ons are Lightning and Synckolab. The workstations are Kubuntu, most 10.04, some 11.10. It basically works, but for one nagging problem: the automatic sync (that is, the setting that starts Synckolab when Thunderbird starts and every x minutes thereafter) does not fire.

    We went through the whole routine: setting it, setting it to zero and back to any number of minutes, stopping/starting Thunderbird or the entire computer to make sure it sees and sets it. The configuration console reflects the changes. But still it will not automatically fire Synckolab. Manual syncs work without any problem (none that we've seen - they reflect all the added, changed etc. calendar events).

    In short: Synckolab does not fire automatically with any setting we have thought of.

    Read the article

  • better options for screen?

    - by lonestar21
    OK. So I love screen. It has saved my bacon a few times when machines crashed or got disconnected from the network. However, there are enough reasons to keep me from using screen for everything, which include:

    • Scrolling is a pain in the butt. Why can't I just interact as though this is a normal bash shell?
    • My keyboard shortcuts are gone. I have a number of things customized in my bash environment; is there a way to get them to work in screen as well?

    Are there any tools or tips that I can use to make my screen-using experience as high quality as my bash-using experience?
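    A couple of the complaints above are addressable from ~/.screenrc; a sketch (settings I believe are standard, but worth checking against your screen version):

        # keep a large scrollback buffer per window
        defscrollback 10000
        # start login shells so your usual bash configuration is read
        shell -$SHELL
        # skip the splash screen
        startup_message off

    Scrolling inside a window is done from copy mode (Ctrl-a then Esc, or Ctrl-a [), which is admittedly still not the same as a terminal's native scrollback.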

    Read the article

  • Project Showcase: SaaS Web Apps Hits a Home Run with New SCMS Database

    - by Webgui
    We love seeing projects from start to finish, and we're happy to share the latest example with you.

    Who: SaaS Web Apps - they use Software as a Service to create web applications that look and feel like desktop applications.

    What: SaaS Web Apps needed to build a Sports Contract Management System (SCMS) for one of its customers, Premier Stinson Sports.

    Why: The SCMS database is used for collecting, analyzing and recording college coach and athletic directors' employment and contract data.

    The Challenge: Premier Stinson Sports works with a number of partners, each with its own needs and unique requirements. For example, USA Today uses the system to provide cutting edge news analysis, while The National Sports Law Institute of Marquette University Law School uses it for the latest sports contract data and student analysis. In addition, the system needed to be secure due to the sensitivity of the data; it was essential that the user security and permissions be easily configurable. As always, performance was a key factor, especially with the intense reporting and analytical capabilities of this project. Because of this, most of the processing had to be done on a dedicated server, but the project called for the richness and responsiveness of a desktop application.

    The Solution: To execute the project, SaaS Web Apps used ASP.NET-based Visual WebGui from Gizmox, combined with SQL Server 2008 and SQL Reporting Services. This combination resulted in a quick deployment for SaaS Web Apps' customers.

    The Result: The completed project gave each partner the scalability and availability of a web application with the performance and security of a desktop application. As an example, USA Today pulls data from this database to give readers the latest sports stats - Salary analysis of 2010 Football Bowl Subdivision Coaches. And here's a screenshot of the database itself. Great work, SaaS Web Apps!

    Read the article

  • How to filter Varnish logs based on XID?

    - by Martijn Heemels
    I'm running into infrequent 503 errors which appear hard to pinpoint. Varnishlog is driving me mad, since I can't seem to get the information I want out of it. I'd like to see both the client- and backend-communications as seen by Varnish. I thought the XID number, which is logged on Varnish's default error page, would allow me to filter the exact request out of the logging buffer. However, no combination of varnishlog parameters gives me the output I need.

    The following only shows the client-side communication:

        varnishlog -d -c -m ReqStart:1427305652

    while this only shows the resulting backend communication:

        varnishlog -d -b -m TxHeader:1427305652

    Is there a one-liner to show the entire request?

    Read the article

  • Exclude pings from Apache error logs (run from PHP exec)

    - by fooraide
    Now, for a number of reasons I need to ping several hosts on a regular basis for a dashboard display. I use this PHP function to do it:

        function PingHost($strIpAddr) {
            exec(escapeshellcmd('ping -q -W 1 -c 1 '.$strIpAddr), $dataresult, $returnvar);
            if (substr($dataresult[4],0,3) == "rtt") {
                // We got a ping result, let's parse it.
                $arr = explode("/",$dataresult[4]);
                return ereg_replace(" ms","",$arr[4]);
            } elseif (substr($dataresult[3],35,16) == "100% packet loss") {
                // Host is down!
                return "Down";
            } elseif ($returnvar == "2") {
                return "No DNS";
            }
        }

    The problem is that whenever there is an unknown host, an error gets logged to my Apache error log (/var/log/apache/error.log). How would I go about disabling logging for this particular function? Disabling logs in the vhost is not an option, since logs for that vhost are relevant - just not the pings. Thanks.
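    Since the messages come from ping writing to stderr rather than from PHP itself, one low-impact option (a sketch, untested here) is to send that stream to /dev/null inside the exec call, which keeps the unknown-host noise out of Apache's error log without touching the vhost logging:

        exec(escapeshellcmd('ping -q -W 1 -c 1 ' . $strIpAddr) . ' 2>/dev/null',
             $dataresult, $returnvar);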

    Read the article

  • Unable to make sound play in headset

    - by user50849
    Top right, I click the sound icon, select Sound Settings, and connect my USB headset. I can then see the headset being detected as it pops up in the menu. I click it, and expect the currently playing audio to get sent to the headset instead. My problem is that it does not: the audio keeps playing through the built-in speakers.

    More info: the icon for my built-in card in the sound settings is a circuit with a note symbol on top. The symbol for the headset is just a black background with a "No" symbol on it, which might mean it doesn't work somehow. I installed pavucontrol, and notice that no second sound card shows up in there. When connecting, the syslog says:

        Jun 20 09:38:46 yuna kernel: [40144.553431] usb 2-1.2: new full-speed USB device number 11 using ehci_hcd
        Jun 20 09:38:46 yuna kernel: [40144.650609] input: C-Media USB Headphone Set as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.3/input/input20
        Jun 20 09:38:46 yuna kernel: [40144.650895] generic-usb 0003:0D8C:000C.000B: input,hidraw0: USB HID v1.00 Device [C-Media USB Headphone Set ] on usb-0000:00:1d.0-1.2/input3
        Jun 20 09:38:46 yuna mtp-probe: checking bus 2, device 11: "/sys/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2"
        Jun 20 09:38:46 yuna mtp-probe: bus: 2, device: 11 was not an MTP device

    Read the article

  • TechEd 2012: Fast SQL Server

    - by Tim Murphy
    While I spend a certain amount of my time creating databases (coding around SQL Server and setting up a server when I have to), it isn't my bread and butter. Since I have run into a number of times that SQL Server needed to be tuned, I figured I would step out of my comfort zone and see what I can learn.

    Brent Ozar packed a mountain of information into his session on making SQL Server faster. I'm not sure how he found time to hit all of his points, since he was allowing the audience to abuse him on Twitter instead of asking questions, but he managed it. I also questioned his sanity since he appeared to be using a fruit laptop.

    He had my attention, though, when he stated that he had given up on telling people not to use "select *". He posited that it could be fixed with hardware by caching the data in memory. He continued by cautioning that having too many indexes could defeat this approach. His logic was sound if not always practical, but it was a good place to start when determining the trade-offs you need to balance. He was moving pretty fast, but I believe he was prescribing this solution predominantly for OLTP databases prior to moving on to data warehouse solutions.

    Much of the advice he gave for data warehouses is contained in the Microsoft Fast Track guidance, so I won't rehash it here. To summarize, the solution seems to be the proper balance of memory, disk access speed and the speed of the pipes that get the data from storage to the CPU. It appears to be sound guidance, and the session gave enough information that going forward we should be able to find the details needed easily. Just what the doctor ordered.

    del.icio.us Tags: SQL Server, TechEd, TechEd 2012, Database, Performance Tuning

    Read the article

  • What is the fastest RAID in practice?

    - by Luke
    I'm going to be rebuilding my server, and I want much faster access to my data. I've used RAID 1 and 0 in the past, and have decided on RAID 10 (with a dedicated RAID card). Then someone told me to use RAID 5+0, and then someone else told me to use RAID 6+0. Assuming the hardware RAID card supports each level, what is currently the FASTEST RAID available, given x number of hard drives? Reliability is now another factor, and I am willing to spend money on new drives if a drive (or multiple drives) fail. I simply want to know what the fastest RAID level is, along with some reliability for recovering from a failure.

    Read the article
