Search Results

Search found 29133 results on 1166 pages for 'week number'.


  • TiVo Follow-up… Training Opportunities

    - by MightyZot
    A few posts ago I talked about my experience with TiVo Customer Service. While I didn’t receive bad service per se, I felt like the reps could have communicated better. I made the argument that it should be just as easy to leave a company as it is to engage with a company, even though my intention is to remain a TiVo fan. I worked for DataStorm Technologies in the early 90s. I pointed out to another developer that we were leaving files behind in our installations. My opinion was that, if the customer is uninstalling our application, there should be no trace of it left after uninstall except for the customer’s data. He replied with, “screw ‘em. They’re leaving us. Why do we care if we left anything behind?” Wow. Surely there is a lot of arrogance in that statement. Think about this…how often do you change your services, devices, or whatever?  Personally, I change things up about once every two or three years. If I don’t change things up, I at least think about it. So, every two or three years there is an opportunity for you (as a vendor or business) to sell me something. (That opportunity actually exists all the time, because there are many of these two or three year periods overlapping.) Likewise, you have the opportunity to win back my business every two or three years as well. Customer service on exit is just as important as customer service during engagement because, every so often, you have another chance to gain back my loyalty. If you screw that up on exit, your chances are close to zero. In addition, you need to consider all of the potential or existing customers that are part of or affected by my social organizations. “Melissa” at TiVo gave me a call last week and set up some time to talk about my experience. We talked yesterday and she gave me a few moments to pontificate about my thoughts on the importance of a complete customer experience. She had listened to my customer support calls and agreed that I had made it clear that I intended to remain a TiVo customer even though suddenLink is handling my subscription. She said that suddenLink is a very important partner for them and, of course, they want to do everything they can to support TiVo / suddenLink customers.  “Melissa” also said that they had turned this experience into a training opportunity for the reps involved. I hope that is true, because that “programmer arrogance” that I mentioned above (which was somewhat pervasive back then) may be part of the reason why that company is no longer around. Good job “Melissa”!  And, like I said, I am still a TiVo fan. In fact, we love our new TiVo and many of the great new features. In addition, if you’re one of the two people that read these posts, please remember that these are just opinions. Your experiences may be, and likely will be, completely unique to you.

    Read the article

  • Linux, fdisk: change order of partitions

    - by osgx
    I have a hard drive with 24 logical partitions on it. Half of them are Linux and half are Windows. The current ordering is: 3 Linux partitions; 12 Windows partitions; 9 Linux partitions. In this setup, Windows can access any partition (no limit on partition number), but Linux can't access sda16, sda17 ... Can I change the numbering of the partitions without moving them on disk? I want all Linux partitions to be numbered below 16, and the Windows partitions 16 and above, so Linux will be able to access all the Linux partitions. I have fdisk/sfdisk and they see all the partitions.
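    One hedged sketch of how this might be done, assuming /dev/sda and a recent util-linux; take a backup of the partition table first, since a mistake here can lose the whole disk:

        # Dump the current table; keep an untouched copy.
        sfdisk -d /dev/sda > partitions.dump
        cp partitions.dump partitions.orig
        # In partitions.dump, swap the start=/size=/Id= values between the numbered
        # /dev/sdaNN entries so the Linux partitions own the low numbers; the values
        # themselves must stay exactly as dumped, only their assignment to numbers moves.
        sfdisk --no-reread /dev/sda < partitions.dump
        # Alternatively, fdisk's expert menu has an "f" (fix partition order) command,
        # but it renumbers strictly by on-disk position rather than to an arbitrary order.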

    Read the article

  • Performance Optimization – It Is Faster When You Can Measure It

    - by Alois Kraus
    Performance optimization in bigger systems is hard because the measured numbers can vary greatly depending on the measurement method of your choice. To measure the execution timing of specific methods in your application, you usually use one of the following:

    Time Measurement Method – Potential Pitfalls

    Stopwatch: The most accurate method on recent processors. Internally it uses the RDTSC instruction. Since the counter is processor specific, you can get greatly different values when your thread is scheduled to another core or the core goes into a power-saving mode. But things do change, luckily – Intel's Designer's vol3b, section 16.11.1: "16.11.1 Invariant TSC. The time stamp counter in newer processors may support an enhancement, referred to as invariant TSC. Processor's support for invariant TSC is indicated by CPUID.80000007H:EDX[8]. The invariant TSC will run at a constant rate in all ACPI P-, C-, and T-states. This is the architectural behavior moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource."

    DateTime.Now: Good, but it has only a resolution of 16ms, which may not be enough if you want more accuracy.

    Reporting Method – Potential Pitfalls

    Console.WriteLine: Ok if not called too often.

    Debug.Print: Are you really measuring performance with debug builds? Shame on you.

    Trace.WriteLine: Better, but you need to plug in some good output listener like a trace file. Be aware that the first time you call this method it will read your app.config and deserialize your system.diagnostics section, which also takes time.

    In general it is a good idea to use some tracing library which measures the timing for you, so you only need to decorate some methods with tracing and can later verify if something has changed for the better or worse. In my previous article I compared measuring performance with quantum mechanics. This analogy works surprisingly well. When you measure a quantum system there is a lower limit to how accurately you can measure something. The Heisenberg uncertainty relation tells us that you cannot measure both the impulse and the location of a particle in a quantum system at the same time with infinite accuracy. For programmers the two variables are execution time and memory allocations. If you try to measure the timings of all methods in your application, you will need to store them somewhere. The fastest storage space besides the CPU cache is memory. But if your timing values consume all available memory, there is no memory left for the actual application to run. On the other hand, if you try to record all memory allocations of your application, you will also need to store the data somewhere. This will cost you memory and execution time. These constraints are always there, and regardless of how good the marketing of tool vendors for performance and memory profilers is: any measurement will disturb the system in a non-predictable way. Commercial tool vendors will tell you they calculate this overhead and subtract it from the measured values to give you the most accurate numbers, but in reality that is not entirely true. After falling into the trap of trusting the profiler timings several times, I have got into the habit to:

    1. Measure with a profiler to get an idea where potential bottlenecks are.
    2. Measure again, tracing only the specific methods, to check if a method is really worth optimizing.
    3. Optimize it.
    4. Measure again.
    5. Be surprised that your optimization has made things worse.
    6. Think harder.
    7. Implement something that really works.
    8. Measure again.
    9. Finished! – Or look for the next bottleneck.

    Recently I have looked into issues with serialization performance. For serialization, DataContractSerializer was used, and I was not sure if XML is really the most optimal wire format. After looking around I found protobuf-net, which uses Google's Protocol Buffer format – a compact binary serialization format. What is good for Google should be good for us. A small sample app to check out performance was a matter of minutes:

    using ProtoBuf;
    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Reflection;
    using System.Runtime.Serialization;

    [DataContract, Serializable]
    class Data
    {
        [DataMember(Order = 1)] public int IntValue { get; set; }
        [DataMember(Order = 2)] public string StringValue { get; set; }
        [DataMember(Order = 3)] public bool IsActivated { get; set; }
        [DataMember(Order = 4)] public BindingFlags Flags { get; set; }
    }

    class Program
    {
        static MemoryStream _Stream = new MemoryStream();
        static MemoryStream Stream
        {
            get
            {
                _Stream.Position = 0;
                _Stream.SetLength(0);
                return _Stream;
            }
        }

        static void Main(string[] args)
        {
            DataContractSerializer ser = new DataContractSerializer(typeof(Data));
            Data data = new Data
            {
                IntValue = 100,
                IsActivated = true,
                StringValue = "Hi this is a small string value to check if serialization does work as expected"
            };
            var sw = Stopwatch.StartNew();
            int Runs = 1000 * 1000;
            for (int i = 0; i < Runs; i++)
            {
                //ser.WriteObject(Stream, data);
                Serializer.Serialize<Data>(Stream, data);
            }
            sw.Stop();
            Console.WriteLine("Did take {0:N0}ms for {1:N0} objects", sw.Elapsed.TotalMilliseconds, Runs);
            Console.ReadLine();
        }
    }

    The results are indeed promising:

    Serializer     Time in ms   N objects
    protobuf-net          807   1,000,000
    DataContract        4,402   1,000,000

    Nearly a factor 5 faster, and a much more compact wire format. Let's use it! After switching over to protobuf-net, the transferred wire data dropped by a factor of two (good) and the performance worsened by nearly a factor of two. How is that possible? We measured it! Protobuf-net is much faster! As it turns out, protobuf-net is faster, but it has a cost: the first time a type is de/serialized it uses some very smart code-gen, which does not come for free. Let's try to measure this by setting the Runs value of our performance test app not to one million but to 1:

    Serializer     Time in ms   N objects
    protobuf-net           85           1
    DataContract           24           1

    The code-gen overhead is significant and can take up to 200ms for more complex types. The break-even point where the code-gen cost is amortized by the faster serialization performance is (assuming small objects) somewhere between 20,000 and 40,000 serialized objects. As it turned out, my specific scenario involved about 100 types and 1000 serializations in total. That explains why the good old DataContractSerializer is not so easy to take out of business. The final approach I ended up with was to reduce the number of types and to serialize primitive types via BinaryWriter directly, which turned out to be a pretty good alternative. It sounded good until I measured again and found that my optimizations so far did not help much. After looking deeper at the profiling data, I found that one of the 1000 calls took 50% of the time. So how do I find out which call it was?
    Normal profilers fall short at this discipline. A (totally undeservedly) relatively unknown profiler is SpeedTrace, which, unlike normal profilers, creates traces of your application by instrumenting your IL code at runtime. This way you can look at the full call stack of the one slow serializer call to find out if this stack was something special. Unfortunately the call stack showed nothing special. But luckily I have my own tracing as well, and I could see that the slow serializer call happened during the serialization of a bool value. When, after much analysis, you encounter something unreasonable that you cannot explain, the chances are good that your thread was suspended by the garbage collector. Whether there is a problem with excessive GCs remains to be investigated, but so far the serialization performance seems to be mostly ok. When you profile a complex system with many interconnected processes, you can never be sure that the timings you just measured are accurate at all. Some process might be hitting the disc, slowing things down for all other processes for some seconds as well. There is a big difference between warm and cold startup. If you restart all processes, you can basically forget the first run, because the OS disc cache, JIT and GCs make the measured timings very flexible. When you are in need of a random number generator, you should measure cold startup times of a sufficiently complex system. After the first run you can try again, getting different and much lower numbers. Now try again at least two times to get some feeling for how stable the numbers are. Oh, and try to do the same thing the next day. It might be that the bottleneck you found yesterday is gone today. Thanks to the GC and other random stuff, it can become pretty hard to find anything worth optimizing if no big bottlenecks except bloatloads of code are left anymore. When I have found a spot worth optimizing, I make the code changes and measure again to check if something has changed. If it has got slower and I am certain that my change should have made it faster, I can blame the GC again. The thing is that if you optimize stuff and allocate fewer objects, the GC times will shift to some other location. If you are unlucky, it will make your faster working code slower because you now see GCs at times where none were before. This is where the stuff gets really tricky. A safe escape hatch is to create a repro of the slow code in an isolated application, so you can change things quickly in a reliable manner. Then the normal profilers also start working again. As Vance Morrison points out, it is much more complex to profile a system against the wall clock than to optimize for CPU time. The reason is that for wall clock time analysis you need to understand how your system works and which threads (if you have not one but perhaps 20) are causing a visible delay to the end user, and which threads can wait a long time without affecting the user experience at all. Next time: commercial profiler shootout.

    Read the article

  • IIS 7 503 error, application pool stop crash, defdoc.dll could not be loaded due to a configuration

    - by optician
    Hi all, I'm currently trying to get IIS 7 to work, but every time I request a page, the application pool goes into stopped status. This is what comes back in the event log: "The Module DLL 'C:\Windows\System32\inetsrv\defdoc.dll' could not be loaded due to a configuration problem. The current configuration only supports loading images built for a x86 processor architecture. The data field contains the error number." I've already reinstalled IIS; any other ideas? I read that someone fixed this by downloading the DLL again, but this seems like an odd solution. Thanks. EDIT: I have now replaced the file with one I downloaded off the internet, and now it says "The Module DLL 'C:\Windows\System32\inetsrv\protsup.dll' could not be loaded due to a configuration problem." I hope I don't have to do this for hundreds of these files.
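    The "only supports loading images built for a x86 processor architecture" wording suggests a 32-bit/64-bit mismatch between the application pool and the native module DLLs, rather than damaged files. A hedged sketch of aligning the pool's bitness with appcmd ("DefaultAppPool" is a placeholder for your pool name):

        REM Run the pool as 32-bit so it loads x86 module DLLs ...
        %windir%\system32\inetsrv\appcmd set apppool "DefaultAppPool" /enable32BitAppOnWin64:true

        REM ... or force it back to 64-bit if the modules are x64 builds.
        %windir%\system32\inetsrv\appcmd set apppool "DefaultAppPool" /enable32BitAppOnWin64:false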

    Read the article

  • Logparser and Powershell

    - by Michel Klomp
    Logparser in PowerShell: one of the few examples of how to use Logparser in PowerShell comes from the Microsoft.com Operations blog. This script is a good base for creating more advanced Logparser scripts:

    $myQuery = new-object -com MSUtil.LogQuery
    $szQuery = "Select top 10 * from r:\ex07011210.log";
    $recordSet = $myQuery.Execute($szQuery)
    for(; !$recordSet.atEnd(); $recordSet.moveNext())
    {
        $record = $recordSet.getRecord();
        write-host ($record.GetValue(0) + "," + $record.GetValue(1));
    }
    $recordSet.Close();

    Logparser input formats: the previous example uses the default Logparser object; you can extend this with the Logparser input formats. With these formats you can get information from the event log, different types of log files, Active Directory, the registry and XML files. Here are the different ProgIds you can use:

    Input Format   ProgId
    ADS            MSUtil.LogQuery.ADSInputFormat
    BIN            MSUtil.LogQuery.IISBINInputFormat
    CSV            MSUtil.LogQuery.CSVInputFormat
    ETW            MSUtil.LogQuery.ETWInputFormat
    EVT            MSUtil.LogQuery.EventLogInputFormat
    FS             MSUtil.LogQuery.FileSystemInputFormat
    HTTPERR        MSUtil.LogQuery.HttpErrorInputFormat
    IIS            MSUtil.LogQuery.IISIISInputFormat
    IISODBC        MSUtil.LogQuery.IISODBCInputFormat
    IISW3C         MSUtil.LogQuery.IISW3CInputFormat
    NCSA           MSUtil.LogQuery.IISNCSAInputFormat
    NETMON         MSUtil.LogQuery.NetMonInputFormat
    REG            MSUtil.LogQuery.RegistryInputFormat
    TEXTLINE       MSUtil.LogQuery.TextLineInputFormat
    TEXTWORD       MSUtil.LogQuery.TextWordInputFormat
    TSV            MSUtil.LogQuery.TSVInputFormat
    URLSCAN        MSUtil.LogQuery.URLScanLogInputFormat
    W3C            MSUtil.LogQuery.W3CInputFormat
    XML            MSUtil.LogQuery.XMLInputFormat

    Using Logparser to parse IIS logs: if you use the IISW3CInputFormat, you can use the field names instead of the field index to get the information from an IIS log file; it also skips the comment rows in the log file.

    $ObjLogparser = new-object -com MSUtil.LogQuery
    $objInputFormat = new-object -com MSUtil.LogQuery.IISW3CInputFormat
    $Query = "Select top 10 * from c:\temp\hb\ex071002.log";
    $recordSet = $ObjLogparser.Execute($Query, $objInputFormat)
    for(; !$recordSet.atEnd(); $recordSet.moveNext())
    {
        $record = $recordSet.getRecord();
        write-host ($record.GetValue("s-ip") + "," + $record.GetValue("cs-uri-query"));
    }
    $recordSet.Close();

    Read the article

  • Two questions about restoring Thunderbird from a backup

    - by Eric
    Setting up a new Windows 7 PC, I'm puzzled by two things in Thunderbird 3.1.9: I restored a profile from a three-month-old backup, no problem. I then copied more recent files into the Mail/ directory, but TBird still shows the old messages. The last message in Inbox is dated 3/16/2011 -- how do I get TBird to display all the messages in the Local Folders/Inbox view? A large number of the existing messages are now displayed in separate tabs -- I can't tell you how many, but there could be over 1000. Which file governs this? Or can I hire someone from Mechanical Turk to come over and manually close each tab?
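    A speculative sketch for question 1: Thunderbird keeps a .msf index file next to each mbox file, and copying newer mbox files in under an old index can leave the folder view stale. Deleting the .msf files (with Thunderbird closed) makes it rebuild them from the mbox files on the next start; the profile folder name below is a placeholder:

        cd /d "%APPDATA%\Thunderbird\Profiles\xxxxxxxx.default\Mail\Local Folders"
        del *.msf
        REM For question 2, the open tabs are reportedly remembered in session.json
        REM in the profile root; renaming it away should close them all at once:
        ren "%APPDATA%\Thunderbird\Profiles\xxxxxxxx.default\session.json" session.json.bak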

    Read the article

  • preparing to migrate MOSS 2007 SP1 content into new server with MOSS 2007 SP2 with the same server n

    - by Albert Widjaja
    To all experts here, I'd like to perform a MOSS server content migration from the existing single-instance server (SQL Server 2008, Windows Server 2008, SharePoint 2007 SP1), all in one, while maintaining the server name only. Here's what I have in my migration plan document outline. I have created the new Windows Server 2008 R2 + SQL Server 2008 SP1 + MOSS 2007 SP2 server; after that:

    1. Back up MOSS_ContentDB from SQL Server 2008 SSMS (is this enough to cover all top-level sites and their content in the libraries and document repositories?)
    2. Back up all top-level sites from the CA site using the CA site backup and restore tools.
    3. Turn off the old MOSSDEV01 (the current server).
    4. Rename the new server (TempServ01) to MOSSDEV01 (so that user bookmarks and other links inside the MOSS sites still work).
    5. Restore the DB.
    6. Restore the site collections and their content from the backup.

    Is that the correct way to do it? I haven't executed the server configuration wizard to create the same port numbers for the SSP, CA and the SQL Server DB yet. Any ideas and suggestions would be greatly appreciated. Thanks.
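    For steps 2 and 6, site-collection backup and restore in MOSS 2007 is commonly scripted with stsadm rather than clicked through Central Administration; a hedged sketch (URLs and paths are placeholders, one pair of commands per site collection):

        REM On the old server: back up a site collection.
        stsadm -o backup -url http://mossdev01/sites/teamsite -filename E:\backup\teamsite.bak

        REM On the new server, after the rename: restore it.
        stsadm -o restore -url http://mossdev01/sites/teamsite -filename E:\backup\teamsite.bak -overwrite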

    Read the article

  • SQL SERVER – Solution of Puzzle – Swap Value of Column Without Case Statement

    - by pinaldave
    Earlier this week I asked a question about how to swap the values of a column without using a CASE statement. Read here: SQL SERVER – A Puzzle – Swap Value of Column Without Case Statement. I proposed 3 different solutions in the blog post itself. I had requested the help of the community to come up with alternate solutions, and honestly I am stunned and amazed by the qualified entries. I will not be able to cover every single solution which was posted as a comment; however, I would like to cover a few interesting entries for sure. I am selecting 5 solutions which are different (not necessarily the most optimal or the best – just different and interesting). Just for clarity, I am including the original problem statement here:

    USE tempdb
    GO
    CREATE TABLE SimpleTable (ID INT, Gender VARCHAR(10))
    GO
    INSERT INTO SimpleTable (ID, Gender)
    SELECT 1, 'female'
    UNION ALL
    SELECT 2, 'male'
    UNION ALL
    SELECT 3, 'male'
    GO
    SELECT * FROM SimpleTable
    GO
    -- Insert Your Solutions here
    -- Swap value of Column Gender
    SELECT * FROM SimpleTable
    GO
    DROP TABLE SimpleTable
    GO

    Here are the five most interesting and different solutions I have received.

    Solution by Roji P Thomas

    UPDATE S
    SET S.Gender = D.Gender
    FROM SimpleTable S
    INNER JOIN SimpleTable D ON S.Gender != D.Gender

    I really loved this solution as it is very simple and drives the point home – elegant, and it will work for pretty much any pair of values (not necessarily restricted to the options in the original question, 'male' or 'female').

    Solution by Aneel

    CREATE TABLE #temp(id INT, datacolumn CHAR(4))
    INSERT INTO #temp VALUES(1,'gent'),(2,'lady'),(3,'lady')
    DECLARE @value1 CHAR(4), @value2 CHAR(4)
    SET @value1 = 'lady'
    SET @value2 = 'gent'
    UPDATE #temp
    SET datacolumn = REPLACE(@value1 + @value2, datacolumn, '')

    Aneel has a very interesting solution where he concatenated both values and replaced the original value. I personally liked the creativity of this solution.

    Solution by SIJIN KUMAR V P

    UPDATE SimpleTable
    SET Gender = RIGHT(('fe'+Gender), DIFFERENCE((Gender), SOUNDEX(Gender))*2)

    Sijin has amazed me with the DIFFERENCE and SOUNDEX functions. I had never imagined that the above two functions could resolve the problem. Hats off to you, Sijin.

    Solution by Nikhildas

    UPDATE St
    SET St.Gender = t.Gender
    FROM SimpleTable St
    CROSS APPLY (SELECT DISTINCT Gender FROM SimpleTable WHERE St.Gender != Gender) t

    I was expecting that someone would come up with a solution using CROSS APPLY. This is indeed very neat and for sure an interesting exercise. If you do not know how CROSS APPLY works, this is the time to learn.

    Solution by mistermagooo

    UPDATE SimpleTable
    SET Gender = X.NewGender
    FROM (VALUES('male','female'),('female','male')) AS X(OldGender, NewGender)
    WHERE SimpleTable.Gender = X.OldGender

    As per the author this is a slow solution, but I love how the syntax is laid out here. I will say this is the most beautifully written solution (not necessarily the best).

    Bonus: Solution by Madhivanan

    Somehow I was confident that Madhi – SQL Server MVP – would come up with something I would be compelled to read. He has written a complete blog post on this subject and I encourage all of you to go ahead and read it.

    Now, personally I wanted to list every single comment here. Some are so good that I am just amazed by the creativity. I will write a follow-up to this blog post in the future. However, here is the challenge for you.

    Challenge: Go over the 50+ solutions listed for this simple problem. Here are my two asks for you:
    1) Pick your best solution and list it here in a comment. This exercise will for sure teach us one or two things.
    2) Write your own solution which is not yet covered by the already listed 50 solutions. I am confident that there is no end to creativity.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • OpenVPN vs. IPSec - Pros and Cons, what to use?

    - by jens
    Interestingly, I have not found any good search results when searching for "OpenVPN vs IPSec". I need to set up a private LAN over an untrusted network, and as far as I know both approaches seem to be valid, but I do not know which one is better. I would be very thankful if you could list the pros and cons of both approaches, and maybe your suggestions and experiences on what to use. Update (regarding the comment/question): In my concrete case, the goal is to have any number of servers (with static IPs) connected transparently with each other. A small portion of "dynamic clients like road warriors" (with dynamic IPs) should also be able to connect. The main goal, however, is having a "transparent secure network" run on top of an untrusted network. I am quite a newbie, so I do not know how to correctly interpret "1:1 point-to-point connections"; the solution should support broadcasts and all that stuff, so it is a fully functional network... Thank you very much!! Jens

    Read the article

  • Windows XP/7: custom routing for VPN connection

    - by Peter Becker
    We are dealing with a badly configured VPN connection from a vendor, which set up the default gateway but doesn't route traffic anywhere beyond their VPN zone. I managed to do some ad-hoc routing to configure a computer in a way that it can reach the vendor's VPN, our local network as well as the internet. I then tried to turn this into a script, but that failed since the interface number of the VPN changes on every connection. Is there a way in Windows XP and/or Windows 7 to configure custom routing on the client side of a VPN connection? What I would like to do is to have a script running just after the connection comes up that changes the routing table (similar to an ifup script on UNIX).
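    A hedged batch sketch of the ifup-style idea, run right after dialing: it looks up the VPN adapter's current interface index instead of hard-coding it, so it survives the index changing on every connection. The adapter name "VendorVPN", the networks and the gateways are all placeholders, and on XP the netsh context is "interface ip" rather than "interface ipv4":

        @echo off
        REM Find the current interface index of the VPN adapter.
        for /f "tokens=1" %%i in (
            'netsh interface ipv4 show interfaces ^| findstr /c:"VendorVPN"'
        ) do set VPNIF=%%i

        REM Send only the vendor's networks through the VPN (gateway = the
        REM address the VPN assigned us, shown by ipconfig after dialing).
        route add 10.20.0.0 mask 255.255.0.0 10.20.0.5 if %VPNIF%

        REM Point the default route back at the local gateway for everything else.
        route change 0.0.0.0 mask 0.0.0.0 192.168.1.1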

    Read the article

  • Forwarding port to a VM - How to?

    - by Peter Gadd
    I use Win 8 Ent x64 on my PC, and I also have a Win 7 VMware virtual machine set up using a bridged network adapter. The IPv4 address of the Win 7 VM is 192.168.1.115. I require access to the VM from the Internet through port 1688. How do I set up port forwarding to achieve this? My router is a Cisco Linksys WAG120N. If you require any further information to help me with this, I will gladly supply it. Thanks in advance.

    Read the article

  • Postgres 9.0 locking up, 100% CPU

    - by Jake
    We are having a problem where our Postgres 9.0 server occasionally locks up and kills our webapp. Restarting Postgres fixes the problem. Here's what I've been able to observe:

    First, usage of one CPU jumps to 100% for a few minutes
    Disk operations drop to ~0 during this time
    Database operations drop to 0 (blocks and tuples per sec)
    Logs show during this time: WARNING: worker took too long to start; cancelled (twice)
    No queries in logs (only those over 200ms are logged)
    No unusually long-running queries logged before or during
    Then the second CPU jumps to 100%
    The number of postgres processes jumps from the usual 8-10 to ~20
    Matched by a spike in Postgres blocks per second (about twice normal)
    Logs show: LOG: could not accept SSL connection: EOF detected
    Queries are running but slow
    Restarting postgres returns everything to normal

    Setup:
    Server: Amazon EC2 Large
    Ubuntu 10.04.2 LTS
    Postgres 9.0.3
    Dedicated DB server

    Does anyone have any idea what's causing this? Or any suggestions about what else I should be checking out?
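    A minimal diagnostic sketch for the next lockup, assuming a connection can still be opened while it happens (the column names are the 9.0 ones; procpid and current_query were renamed in later releases). The "worker took too long to start" warning appears to come from the autovacuum launcher, which is another hint that backends are being starved:

        -- What is every backend doing, and for how long?
        SELECT procpid, usename, waiting, now() - query_start AS runtime, current_query
        FROM pg_stat_activity
        ORDER BY query_start;

        -- Which backends are waiting on locks that have not been granted?
        SELECT l.pid, l.locktype, l.mode, a.current_query
        FROM pg_locks l
        JOIN pg_stat_activity a ON a.procpid = l.pid
        WHERE NOT l.granted;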

    Read the article

  • ODI 11g – How to override SQL at runtime?

    - by David Allan
    Following on from the posting some time back entitled 'ODI 11g – Simple, Powerful, Flexible', here we push the envelope even further. Rather than just having the SQL we override defined statically in the interface design, we will have it configurable via a variable... at runtime. Imagine you have a well-defined interface shape that you want to be fulfilled, and that shape can be satisfied from a number of different sources; that is what this allows – the ability for one interface to consume data from many different places using variables. The cool thing about ODI's reference API and this is that it can be fantastically flexible and useful. When I use the variable as the option value and execute the top-level scenario that uses this temporary interface, I get prompted (or can get prompted, to be correct) for the value of the variable. Note I am using the <@=odiRef.getObjectName("L","EMP", "SCOTT","D")@> notation for the table reference; since this is done at runtime, the context will resolve to the correct table name etc. Each time I execute, I could use a different source provider (obviously with some dependencies on KMs/technologies here). For example, the following Groovy snippet first executes with the SCOTT model and the EMP datastore, and the next time with the BOB model and the OTHERS datastore (the inner quotes are escaped so the odiRef expression survives Groovy's string parsing):

    m = new Properties();
    m.put("DEMO.SQLSTR", "select empno, deptno from <@=odiRef.getObjectName(\"L\", \"EMP\", \"SCOTT\", \"D\")@>");
    s = new StartupParams(m);
    runtimeAgent.startScenario("TOP", null, s, null, "GLOBAL", 5, null, true);

    m2 = new Properties();
    m2.put("DEMO.SQLSTR", "select empno, deptno from <@=odiRef.getObjectName(\"L\", \"OTHERS\", \"BOB\", \"D\")@>");
    s2 = new StartupParams(m2);
    runtimeAgent.startScenario("TOP", null, s2, null, "GLOBAL", 5, null, true);

    You'll need a patch to 11.1.1.6 for this type of capability. Thanks to my ole buddy Ron Gonzalez from the Enterprise Management group for helping push the envelope!

    Read the article

  • Problem with SLATEC routine usage with gfortran

    - by user39461
    I am trying to compute the Bessel function of the second kind (Bessel_y) using SLATEC's Amos library available on Netlib. Here is the SLATEC code I use; below I have pasted my test program that calls the SLATEC routine CBESY.

    PROGRAM BESSELTEST
      IMPLICIT NONE
      REAL :: FNU
      INTEGER, PARAMETER :: N = 2, KODE = 1
      COMPLEX, ALLOCATABLE :: CWRK(:), CY(:)
      COMPLEX :: Z, ci
      INTEGER :: NZ, IERR

      ALLOCATE(CWRK(N), CY(N))
      ci = cmplx(0.0, 1.0)
      FNU = 0.0e0
      Z = CMPLX(0.3e0, 0.4e0)
      CALL CBESY(Z, FNU, KODE, N, CY, NZ, CWRK, IERR)
      WRITE(*,*) 'CY: ', CY
      WRITE(*,*) 'IERR: ', IERR
      STOP
    END PROGRAM

    And here is the output of the above program:

    CY: ( 5.78591091E-39, 5.80327020E-39) ( 0.0000000 , 0.0000000 )
    IERR: 4

    IERR = 4 means there is some problem with the input itself. To be precise, IERR = 4 means the following, as per the header info in CBESY.f:

    ! IERR=4, CABS(Z) OR FNU+N-1 TOO LARGE - NO COMPUTA-
    !         TION BECAUSE OF COMPLETE LOSSES OF SIGNIFI-
    !         CANCE BY ARGUMENT REDUCTION

    Clearly, CABS(Z) (which is 0.50) and FNU + N - 1 (which is 1.0) are not too large, but the routine CBESY still throws error number 4 as above. The CY array should have the following values for the arguments given in the above code:

    CY(1) = -0.4983 + 0.6700i
    CY(2) = -1.0149 + 0.9485i

    These values were computed using Matlab. I can't figure out what the problem is when I call CBESY from the SLATEC library. Any clues? Much thanks for the suggestions/help. PS: if it is of any help, I used gfortran to compile, link and then create the SLATEC library file (the .a file), which I keep in the same directory as my test program above. Shell commands to execute the above code:

    gfortran -c BesselTest.f95
    gfortran -o a *.o libslatec.a
    a

    GD.

    Read the article

  • What's a good, affordable router that will not give me problems when downloading torrents?

    - by Lirik
    I found several routers on newegg and they're in the $50-60 range, but I'm not sure if they'll handle the number of connections that are created when downloading torrents (100-300 seeds and around 50 peers). My roommate watches netflix movies, my brother and I download torrents, so my NETGEAR router ends up choking on the traffic and I have to restart it quite frequently. I've already posted a couple of questions on the topic and I've come to the conclusion that I need a new router. What are some routers that I should consider (my budget is in the $50 range)?

    Read the article

  • Random Movement for multiple entities

    - by opiop65
    I have this code for an ArrayList of entities. All the entities use the same random value and so all of them move in the same direction. How can I change it so it generates a new random number for each entity?

    public void moveFemale() {
        for (int i = 0; i < 1000; i++) {
            random = rand.nextInt(99);
        }
        if (random >= 0 && random <= 25) {
            posX -= enemyWalkSpeed; // right
        }
        if (random >= 26 && random <= 50) {
            posX += enemyWalkSpeed; // left
        }
        if (random >= 51 && random <= 75) {
            posY -= enemyWalkSpeed; // up
        }
        if (random >= 76 && random <= 100) {
            posY += enemyWalkSpeed; // down
        }
    }

    Is this correct?

    public void moveFemale() {
        for (Female female : GameFrame.females) {
            female.lastChangedDirectionTime += elapsedTime;
            if (female.lastChangedDirectionTime >= CHANGE_DIRECTION_TIME) {
                female.lastChangedDirectionTime = 0;
                random = rand.nextInt(100);
                if (random >= 0 && random <= 25) {
                    posX -= enemyWalkSpeed; // right
                }
                if (random >= 26 && random <= 50) {
                    posX += enemyWalkSpeed; // left
                }
                if (random >= 51 && random <= 75) {
                    posY -= enemyWalkSpeed; // up
                }
                if (random >= 76 && random <= 100) {
                    posY += enemyWalkSpeed; // down
                }
            }
        }
    }
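    For the original question, a minimal sketch (assuming a Female class that exposes its own posX/posY, and a shared walk speed): roll rand.nextInt(100) once per entity inside the loop and apply the move to that entity's own fields, not to fields shared by all of them. The loop of 1000 nextInt(99) calls in the posted code just overwrites one shared value 1000 times, which is why everyone marches in step. The direction timer from the second attempt can be kept; only the roll and the field access need to move:

        import java.util.List;
        import java.util.Random;

        class FemaleMover {
            private static final Random rand = new Random(); // one shared RNG is fine
            private final double enemyWalkSpeed = 1.0;

            public void moveFemales(List<Female> females) {
                for (Female female : females) {
                    int roll = rand.nextInt(100); // fresh 0..99 roll for THIS entity
                    if (roll <= 25) {
                        female.posX -= enemyWalkSpeed; // left
                    } else if (roll <= 50) {
                        female.posX += enemyWalkSpeed; // right
                    } else if (roll <= 75) {
                        female.posY -= enemyWalkSpeed; // up
                    } else {
                        female.posY += enemyWalkSpeed; // down
                    }
                }
            }
        }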

    Read the article

  • Best Practices for MVC Architecture

    - by Mystere Man
    There are a number of questions on StackOverflow regarding MVC best practices, but most of those seem to revolve around things like using dependency injection, or creating helper functions, or do's and don'ts of what to do in views and controllers. My question is more about how to architect an MVC application. For example, we are encouraged to use DI with the Repository pattern to decouple data access from the controller, yet very little is said on HOW to do that specifically for MVC. Where would we place the Repository classes, for instance? They don't seem to be model-related specifically, since the model should likewise be relatively decoupled from the actual data access technologies. A second question involves how to structure the layers or tiers. Most example applications (Nerd Dinner, Music Store, etc.) seem to use a single-tier, 2-layer approach (not counting tests) that typically has controllers directly calling L2S or EF code. If I want to create a multi-tier/layer application, what are some of the best practices there in regards to MVC? This question is one part standard Q&A, but another part best practices, so it could go either here or on Programmers.SE; I am marking it CW. If you feel it would be better suited to Programmers.SE then it can be migrated. EDIT: What happened to the Community Wiki option? It seems to be gone.
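    Where the repository classes live varies, but a common arrangement is a separate infrastructure/data project that the web project references only through interfaces. A minimal sketch (all names hypothetical) of the controller side of that arrangement, with the concrete EF/L2S class supplied by whatever DI container is registered:

        using System.Collections.Generic;
        using System.Web.Mvc;

        public class Product { public int Id { get; set; } public string Name { get; set; } }

        // Lives in the domain/contracts assembly.
        public interface IProductRepository
        {
            IEnumerable<Product> GetAll();
        }

        // Lives in the data-access assembly; the only place that knows about EF/L2S.
        public class EfProductRepository : IProductRepository
        {
            public IEnumerable<Product> GetAll()
            {
                // query the ObjectContext/DataContext here
                return new List<Product>();
            }
        }

        public class ProductsController : Controller
        {
            private readonly IProductRepository _repository;

            // The DI container injects the concrete repository at runtime.
            public ProductsController(IProductRepository repository)
            {
                _repository = repository;
            }

            public ActionResult Index()
            {
                return View(_repository.GetAll());
            }
        }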

    Read the article

  • Ubuntu whois package and request limits

    - by Sam Hammamy
    I'm writing a Django app with a form that accepts an IP and does a whois lookup on the discovered domain names. I've found the Ubuntu package whois, which I plan to call from a Python subprocess, reading the stdout into a StringIO and then parsing for things like Registrar, Name Servers, etc. My question is: it seems that there are many paid whois services, which means that there must be a reason why people don't just use this Ubuntu package. I'm wondering if there's a limit on the number of requests from a single IP to the package's whois server? I will probably be making 250 domain lookups per IP, or maybe more. Also, I've found that some domains aren't searchable: qmul.ac.uk is searchable; kat.ph is not searchable; ahram.org.eg is not searchable. Any particular reason for that?
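    A minimal sketch of the subprocess approach described above (modern Python for brevity; the parsed labels vary by registry, so treating "Registrar"/"Name Server" as universal keys is an assumption):

        import subprocess

        def whois_lookup(domain, timeout=10):
            """Run the system whois client and pick out a few labelled fields."""
            out = subprocess.run(
                ["whois", domain],
                capture_output=True, text=True, timeout=timeout,
            ).stdout
            fields = {}
            for line in out.splitlines():
                key, sep, value = line.partition(":")
                key, value = key.strip().lower(), value.strip()
                if sep and value and key in ("registrar", "name server"):
                    fields.setdefault(key, []).append(value)
            return fields

        print(whois_lookup("qmul.ac.uk"))

    On rate limits: the whois package is just a client, so any throttling happens per registry server; 250 lookups may be fine against one TLD's server and get you temporarily blocked by another.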

    Read the article

  • SQL SERVER – Retrieve SQL Server Installation Date Time

    - by pinaldave
    I have been asked this question a number of times, and my answer has always been: search online and you will find the answer. Every single time someone has followed my answer, they have found an accurate answer within the first few clicks. However, this question is getting increasingly popular, so I have decided to answer it here. I usually prefer to create my own T-SQL script, but in today's case I have taken the script from the web. I have seen this script in so many places that I do not know who the original creator is, so I am not sure who should get credit for it.

    Question: How do you retrieve the SQL Server installation date?

    Answer: Run the following query and it will give you the date of the SQL Server installation:

    SELECT create_date
    FROM sys.server_principals
    WHERE sid = 0x010100000000000512000000

    Question: I have installed the SQL Server Evaluation version; how do I know its expiry date?

    Answer: The SQL Server evaluation period is 180 days. The expiration date is always 180 days from the initial installation. The following query will give the expiration date of an evaluation version:

    -- Evaluation Version Expire Date
    SELECT create_date AS InstallationDate,
           DATEADD(DD, 180, create_date) AS 'Expiry Date'
    FROM sys.server_principals
    WHERE sid = 0x010100000000000512000000
    GO

    I believe there is a way to do the same using the registry, but I have not explored it personally. Now, as I said earlier, there are many different blog posts on this subject. Let me list a few which I really enjoyed reading personally, as they shared a few more insights on this subject: Retrieving SQL Server 2012 Evaluation Period Expiry Date; How to find the Installation Date for an Evaluation Edition of SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How can I promote clean coding at my workplace?

    - by Michael
    I work with a lot of legacy Java and RPG code on an internal company application. As you might expect, a lot of the code is written in many different styles, and often is difficult to read because of poorly named variables, inconsistent formatting, and contradictory comments (if they're there at all). Also, a good amount of code is not robust. Many times code is pushed to production quickly by the more experienced programmers, while code by newer programmers is held back by "code reviews" that IMO are unsatisfactory. (They usually take the form of, "It works, must be ok," than a serious critique of the code.) We have a fair number of production issues, which I feel could be lessened by giving more thought to the original design and testing. I have been working for this company for about 4 months, and have been complimented on my coding style a couple of times. My manager is also a fan of cleaner coding than is the norm. Is it my place to try to push for better style and better defensive coding, or should I simply code in the best way I can, and hope that my example will help others see how cleaner, more robust code (as well as aggressive refactoring) will result in less debugging and change time?

    Read the article

  • spawn-fcgi / FastCGI PHP crashes without traces in logs, on Gentoo

    - by user39046
    Hello, I recently moved from Apache to an Nginx/FastCGI solution. I had it running on a Fedora system with no problems, but since I moved everything to Gentoo, the spawn-fcgi / FastCGI PHP daemon dies, and I can't find any error reports in /var/log/messages, so I don't know why this happens. I've seen that FastCGI on Gentoo is somehow different from the Fedora distro, as it has different conf files and init.d startup scripts. Can someone help me make it more stable? The number of requests I get isn't any different from what I had on Fedora, so I use the default conf that comes with the distro... and after about some hours it simply dies... Thank you very much

    Read the article

  • SPARC64 VII+ Processor Core License Factor Reduced by 33%

    - by john.shell
    The Oracle processor core license factor has been a popular topic over the last few months. For those partners new to Oracle software licensing, the processor core license factor determines the number of licensed CPUs that are required when running Oracle software (those charged on a per-CPU basis) on multi-core processors. My last entry talked about the core factor reduction for our T3 processor. The core license factor for our newly announced SPARC64 VII+ processor is 0.5, which is a 33% reduction from the 0.75 rate used with our SPARC64 VI and VII processors. For example, a server with 16 cores previously required 16 x 0.75 = 12 processor licenses; at the new factor, it requires 16 x 0.5 = 8. What does this mean for our partners? Increased opportunity. This change, similar to our T3-based systems, means that our hardware is the preferred platform for Oracle software. Still a little dizzy about the breadth of Oracle's software offering? Do a simple scan of Oracle's software price lists. Consider this your target market. This change allows you to focus on total solution price or price/performance, not server prices or per-core performance (a standard IBM sales tactic). That's the offensive side of the game. Don't forget your defense. One of the biggest customer benefits of the M-Series is investment protection. The combination of a simple processor/board upgrade, along with a reduction in the processor core license factor, makes upgrading one of the best financial moves for our customers. One reminder: the update to the processor core license factor only applies to the new VII+ processor – NOT the SPARC64 VI or VII processors. You can find the official table here.

    Read the article

  • Power Dynamic Database-Driven Websites with MySQL & PHP

    - by Antoinette O'Sullivan
    Join major names among MySQL customers by learning to power dynamic database-driven websites with MySQL & PHP. With the MySQL and PHP: Developing Dynamic Web Applications course, in 4 days you learn how to develop applications in PHP and how to use MySQL efficiently for those applications! Through a hands-on approach, this instructor-led course helps you improve your PHP skills and combine them with time-proven database management techniques to create best-of-breed web applications that are efficient, solid and secure. You can currently take this course as a: Live Virtual Class (LVC): there are a number of events on the schedule to suit different timezones in January 2013 and March 2013. With an LVC, you get to follow this live instructor-led class from your own desk, so no travel expense or inconvenience. In-Class Event: travel to an education center to attend this class. Here are some events already scheduled:

    Where                When              Delivery Language
    Lisbon, Portugal     15 April 2013     European Portuguese
    Porto, Portugal      15 April 2013     European Portuguese
    Barcelona, Spain     28 February 2013  Spanish
    Madrid, Spain        4 March 2013      Spanish

    If you do not see an event that suits you, register your interest in an additional date/location/delivery language. If you want more in-depth knowledge of developing with MySQL and PHP, consider the MySQL for Developers course. For full details on these and all courses on the authentic MySQL curriculum, go to http://oracle.com/education/mysql.

    Read the article

  • JavaCV IplImage to LWJGL Texture

    - by rendrag
    As a side project I've been attempting to make a dynamic display (for example a screen within a game) that shows images from my webcam. I've been messing around with JavaCV and LWJGL for the past few months and have a basic understanding of how they both work. I found this after scouring Google, but I get an error that the ByteBuffer isn't big enough.

    IplImage img = cam.getFrame();
    ByteBuffer buffer = img.asByteBuffer();
    int textureID = glGenTextures(); // Generate texture ID
    glBindTexture(GL_TEXTURE_2D, textureID); // Bind texture ID

    // I don't know how much of the following is necessary
    // Setup wrap mode
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL12.GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL12.GL_CLAMP_TO_EDGE);

    // Setup texture scaling filtering
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    // Send texture data to OpenGL - this is the line that actually does stuff
    // and that OpenGL has a problem with
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL12.GL_BGR, GL_UNSIGNED_BYTE, buffer);

    That last line throws this:

    Exception in thread "Thread-0" java.lang.IllegalArgumentException: Number of remaining buffer elements is 144, must be at least 921600. Because at most 921600 elements can be returned, a buffer with at least 921600 elements is required, regardless of actual returned element count
        at org.lwjgl.BufferChecks.throwBufferSizeException(BufferChecks.java:162)
        at org.lwjgl.BufferChecks.checkBufferSize(BufferChecks.java:189)
        at org.lwjgl.BufferChecks.checkBuffer(BufferChecks.java:230)
        at org.lwjgl.opengl.GL11.glTexImage2D(GL11.java:2845)
        at tests.TextureTest.getTexture(TextureTest.java:78)
        at tests.TextureTest.update(TextureTest.java:43)
        at lib.game.AbstractGame$1.run(AbstractGame.java:52)
        at java.lang.Thread.run(Thread.java:679)
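    A hedged sketch of the usual fix: 921600 is exactly 640 x 480 x 3, so GL expects a full BGR frame while the buffer only has 144 bytes remaining, which points at either the width/height passed to glTexImage2D not matching the frame or the buffer's position/limit being off. Sizing everything from the IplImage itself sidesteps both (getByteBuffer(), width() and height() are the JavaCV accessors):

        IplImage img = cam.getFrame();
        ByteBuffer buffer = img.getByteBuffer();
        buffer.rewind(); // position 0, so all width*height*channels bytes remain

        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
                     img.width(), img.height(), // dimensions from the frame itself
                     0, GL12.GL_BGR, GL_UNSIGNED_BYTE, buffer);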

    Read the article

  • gparted won't boot from liveCD

    - by ant2009
    Hello. Fedora 12, 2.6.32.9-70.fc12.i686. I downloaded the GParted ISO and burnt it to a CD: gparted-live-0.5.2-1. This GParted Live was created by:

    create-gparted-live -l en -b u -e e -d sid -m http://free.nchc.org.tw/debian -s http://free.nchc.org.tw/debian-security -g http://free.nchc.org.tw/drbl-core -n 2.6.32-3 -i 0.5.2-1

    The files and folders:

    isolinux
    live
    syslinux
    COPYING
    GParted-Live-Version

    I put the disc in and reboot. Nothing happens; I just get a flashing cursor in the top left-hand corner. I then have to use the power button to get it to reboot. I have done this a number of times and the results are the same. Can anyone tell me how to boot from the GParted live CD? Many thanks for any suggestions

    Read the article
