Search Results

Search found 10806 results on 433 pages for 'kate moss big fan'.

Page 74/433 | < Previous Page | 70 71 72 73 74 75 76 77 78 79 80 81  | Next Page >

  • More Maintenance Plan Weirdness

    - by AjarnMark
    I’m not a big fan of the built-in Maintenance Plan functionality in SQL Server. I like the interface in SQL 2005 better than 2000 (it looks more like building an SSIS package), but it’s still a bit of a black box. You don’t really know what commands are being run based on the selections you have made, and you can easily make some unwise choices without realizing it, such as shrinking your database on a regular basis. I really prefer to know exactly which commands, and with which options, are being run on my servers. Recently I had another very strange thing happen with a Maintenance Plan, this time in SQL 2005, SP3. I inherited this server and have done a bit of cleanup on it, but had not yet gotten around to replacing the Maintenance Plans with my own scripts. However, one of the maintenance plans, which was just responsible for doing LOG backups, was running more frequently than that system needed, and I thought I would just tweak the schedule a bit. So I opened the Maintenance Plan, edited the properties of the Subplan to set a new schedule, saved it, and figured all was good to go. But the next execution of the Scheduled Job that triggers the Maintenance Plan code failed with an error about the owner of the job. Specifically, the error was, “Unable to determine if the owner (OldDomain\OldDBAUserID) of job MaintenancePlanName.Subplan has server access (reason: Could not obtain information about Windows NT group/user 'OldDomain\OldDBAUserID').” I was really confused because I had previously updated all of the jobs to have current accounts as the owners. At first I thought it was just a fluke, but it happened on the next scheduled cycle, so I investigated further and, sure enough, that job had the old DBA’s account listed as the owner. I fixed it and the job ran successfully to completion. Now, I don’t really like mysteries like that, so I did some more testing and verified that, sure enough, just editing the Subplan schedule and saving the Maintenance Plan caused the Scheduled Job to be recreated with the old credentials. I don’t know where it is getting those credentials, but I can only assume it is using the original creator of the Maintenance Plan, and for some reason it insists on using that ID for the job owner. I looked through the options in SSMS and could not find anything that would let me easily set the value I wanted it to use. I suspect that if I did something like executing sp_changeobjectowner against the Maintenance Plan, it would use that new ID instead. I’m sure there is a good reason it works this way, but rather than mess around with it much more, I’m just going to spend my time rolling out my replacement scripts instead. Chalk this little hidden oddity up as yet one more reason I’m not a fan of Maintenance Plans.
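
    For anyone hitting the same recreated-owner problem, the manual fix described above (resetting the job owner) can also be scripted. Below is a rough sketch of that idea - not from the original post - using Python with pyodbc as the scripting vehicle; the connection string and the account names are placeholders you would replace for your own environment. It finds SQL Agent jobs still owned by a given login and reassigns them with msdb.dbo.sp_update_job.

```python
import pyodbc

# Placeholder connection string and account names - adjust for your server.
CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=myserver;DATABASE=msdb;Trusted_Connection=yes;")
OLD_OWNER = r"OldDomain\OldDBAUserID"   # login that keeps reappearing as owner
NEW_OWNER = r"NewDomain\CurrentDBA"     # login the jobs should be owned by

conn = pyodbc.connect(CONN_STR)
cur = conn.cursor()

# Find all Agent jobs still owned by the old login.
cur.execute("""
    SELECT j.name
    FROM msdb.dbo.sysjobs AS j
    JOIN sys.server_principals AS p ON p.sid = j.owner_sid
    WHERE p.name = ?
""", OLD_OWNER)

for (job_name,) in cur.fetchall():
    # sp_update_job changes the job owner in place.
    cur.execute(
        "EXEC msdb.dbo.sp_update_job @job_name = ?, @owner_login_name = ?",
        job_name, NEW_OWNER)

conn.commit()
```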

    Read the article

  • Drive Innovation from Data with Oracle Business Analytics

    - by Mike.Hallett(at)Oracle-BI&EPM
    Oracle is doing a big marketing push on the transformational value of Business Analytics to our customers, and we hope you as partners can get excited, get involved, and win more business from this campaign. Work with your local in-country BI business development manager and your partner channel manager; if you want to contribute and are struggling to make contact, then let me know ([email protected]) and I will facilitate introductions. Oracle Day Business Analytics Track: invite your customers to register for their local Oracle Day to get the latest news from OpenWorld and learn about Oracle's Big Data strategy and solution; there is a dedicated Business Analytics track. Business Analytics Facebook Hub: encourage your customers to "Like" the Business Analytics Facebook page at www.facebook.com/OracleBusinessAnalytics so they can receive useful and interesting information on their Facebook wall.

    Read the article

  • Find the latest file by modified date

    - by Rich
    If I want to find the latest file (by mtime) in a (big) directory containing subdirectories, how would I do it? Lots of posts I've found suggest some variation of ls -lt | head (amusingly, many suggest ls -ltr | tail, which is the same but less efficient), which is fine unless you have subdirectories (I do). Then again, you could use find . -type f -exec ls -lt \{\} \+ | head, which will definitely do the trick for as many files as can be specified by one command; i.e. if you have a big directory, -exec...\+ will issue separate commands, so each group will be sorted by ls within itself but not over the total set, and head will therefore pick up only the latest entry of the first batch. Any answers?
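
    If the shell one-liners get awkward (the -exec ... + batching problem described above), the same thing is simple to express as a small script. Here is a minimal Python sketch - my own illustration, not from the post - that walks the tree once and keeps the file with the newest modification time:

```python
from pathlib import Path

def newest_file(root: str = "."):
    """Return the most recently modified regular file under root, or None."""
    files = (p for p in Path(root).rglob("*") if p.is_file())
    return max(files, key=lambda p: p.stat().st_mtime, default=None)

if __name__ == "__main__":
    print(newest_file("."))
```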

    Read the article

  • Macbook Pro 2011 compatibility

    - by ldx
    Hi there, I'm planning to buy a new 13" MacBook Pro, the one that was just released this week with the Thunderbolt port. The question is, has anyone given it a shot with Ubuntu (10.10 or 11.04 alpha)? I'd be especially interested in whether temperature sensors/fan control, external displays via the DisplayPort, and 3D acceleration (for Compiz or some simple 3D games) via the integrated HD 3000 GPU work without flaws. Thanks!

    Read the article

  • Drawing lots of tiles with OpenGL, the modern way

    - by Nic
    I'm working on a small tile/sprite-based PC game with a team of people, and we're running into performance issues. The last time I used OpenGL was around 2004, so I've been teaching myself how to use the core profile, and I'm finding myself a little confused. I need to draw in the neighborhood of 250-750 48x48 tiles to the screen every frame, as well as maybe around 50 sprites. The tiles only change when a new level is loaded, and the sprites are changing all the time. Some of the tiles are made up of four 24x24 pieces, and most (but not all) of the sprites are the same size as the tiles. A lot of the tiles and sprites use alpha blending. Right now I'm doing all of this in immediate mode, which I know is a bad idea. All the same, when one of our team members tries to run it, he gets very bad frame rates (~20-30 fps), and it's much worse when there are more tiles, especially when a lot of those tiles are the kind that are cut into pieces. This all makes me think that the problem is the number of draw calls being made. I've thought of a few possible solutions to this, but I wanted to run them by some people who know what they're talking about so I don't waste my time on something stupid:

    TILES:
    1. When a level is loaded, draw all the tiles once into a frame buffer attached to a big honking texture, and just draw a big rectangle with that texture on it every frame.
    2. Put all the tiles into a static vertex buffer when the level is loaded, and draw them that way. I don't know if there's a way to draw objects with different textures with a single call to glDrawElements, or if this is even something I'd want to do.
    3. Maybe just put all the tiles into a big giant texture and use funny texture coordinates in the VBO?

    SPRITES:
    1. Draw each sprite with a separate call to glDrawElements.
    2. Use a dynamic VBO somehow. Same texture question as number 2 above.
    3. Point sprites? This is probably silly.

    Are any of these ideas sensible? Is there a good implementation somewhere I could look over?
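
    Tile idea 2 above (a single static vertex buffer built at level load), combined with the "big giant texture" of idea 3 (a texture atlas with adjusted texture coordinates), is the usual answer to too many draw calls. The sketch below is only an illustration of that approach, written in Python with PyOpenGL and NumPy (assumed libraries, not from the question); it assumes a core-profile context is already current and that a shader with position at attribute 0 and UV at attribute 1 is bound when drawing.

```python
import ctypes
import numpy as np
from OpenGL.GL import (
    GL_ARRAY_BUFFER, GL_FLOAT, GL_FALSE, GL_STATIC_DRAW, GL_TRIANGLES,
    glGenVertexArrays, glBindVertexArray, glGenBuffers, glBindBuffer,
    glBufferData, glVertexAttribPointer, glEnableVertexAttribArray,
    glDrawArrays,
)

TILE = 48.0      # tile size in pixels
ATLAS = 1024.0   # assumed size of the texture atlas holding all tile images

def build_tile_vao(tiles):
    """tiles: iterable of (x, y, atlas_col, atlas_row) for every tile in the level.

    Packs two triangles per tile into one interleaved (x, y, u, v) buffer and
    uploads it once at level load; the whole layer then costs one draw call."""
    verts = []
    for x, y, col, row in tiles:
        u0, v0 = col * TILE / ATLAS, row * TILE / ATLAS
        u1, v1 = u0 + TILE / ATLAS, v0 + TILE / ATLAS
        x1, y1 = x + TILE, y + TILE
        verts += [(x, y, u0, v0), (x1, y, u1, v0), (x1, y1, u1, v1),
                  (x, y, u0, v0), (x1, y1, u1, v1), (x, y1, u0, v1)]
    data = np.asarray(verts, dtype=np.float32)

    vao = glGenVertexArrays(1)
    glBindVertexArray(vao)
    vbo = glGenBuffers(1)
    glBindBuffer(GL_ARRAY_BUFFER, vbo)
    glBufferData(GL_ARRAY_BUFFER, data.nbytes, data, GL_STATIC_DRAW)

    stride = 4 * 4  # four float32 components per vertex
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, stride, ctypes.c_void_p(0))  # position
    glEnableVertexAttribArray(0)
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, stride, ctypes.c_void_p(8))  # atlas UV
    glEnableVertexAttribArray(1)
    glBindVertexArray(0)
    return vao, len(verts)

def draw_tiles(vao, vertex_count):
    # Atlas texture and shader are assumed to be bound by the caller.
    glBindVertexArray(vao)
    glDrawArrays(GL_TRIANGLES, 0, vertex_count)
```

    Sprites, which change every frame, could live in a second, smaller buffer that is rebuilt each frame (a dynamic VBO) and drawn with one call, which is still far cheaper than one draw call per sprite.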

    Read the article

  • PHP and performance

    - by Naif
    I always hear that PHP is for medium and small websites, whereas .NET and Java are for enterprise applications. My question is about PHP. Why is PHP not a good option for enterprise web applications? Is it because, if the web application becomes bigger, PHP will be slower since it is an interpreted language? I know that the corporate world will choose .NET or J2EE because of the integration with their products and because of back-end services, etc. However, if we just have PHP for building sites and web applications, then how can we use it to perform well with big sites? In short, is there a relationship between the performance of PHP and the size of the website? What are the factors that make PHP an inappropriate option for big sites?

    Read the article

  • Does Ubuntu run well on a USB HDD?

    - by Klaus
    I have here a company notebook, and because the HDD is fully encrypted, I cannot install an extra partition for another system that I would like to use in my free time. And I really need another system, because this crap Windows here, with that much anti-virus, anti-spyware, anti-whatever on it, is so slow and annoying. What can I do? I could use an external USB HDD with another system. Because I would like to handle big files and so on, I don't want to use a USB stick. A USB 2.5" HDD + Ubuntu is, I think, the best option. Here are my questions: Is there anything I should watch out for? Does Ubuntu run well on an external HDD? Will I have big performance problems (because of the USB HDD)? Should I spend a lot of money on a very fast HDD, or is it not that important? Any suggestions?

    Read the article

  • UI Design Patterns: Are you developing a Fusion Apps extension, an ADF or Webcenter App?

    - by asantaga
    A big question I get asked when speaking to partners who are developing Oracle ADF or WebCenter apps is how to make them look nice. Some of the big SIs ask me, "Do we have any design patterns/guidelines we can use?" Alas, website design is a very personal thing and each website will have different requirements and needs; however, I am now pleased to say we've just launched the "Oracle Fusion Applications Design Patterns" website. The website is the result of many years of Oracle R&D into user interface design for Fusion applications and features a really cool web app which allows you to visualise the UI components in action. Although many of the design patterns are related to ADF, it's worth noting that ADF took its lead from Oracle Fusion Applications user interface needs - not the other way around; it's just taken us a while to publish these. Coupled with the dashboard patterns, this makes a really cool extra asset for your kit bag. Links: Design Patterns; Oracle dashboard patterns and guidelines; Usable Apps.oracle.com. Enjoy

    Read the article

  • What is the difference between industrial development and open source development?

    - by Ida
    Intuitively, I think open source development should be much more "casual" than an industrial development process (like at Microsoft). For OSS development: duty separation is not as strict as in big companies (maybe developers == testers in open source development?), and people come in and out of the open source community much more frequently than in big companies. However, the above are just my guesses. I really want to know more about the major differences between open source and industrial development. Is their division of duty totally different (e.g., is there a leader/manager-like role in open source development?)? Maybe it is their communication style that differs a lot? Or their workflow? Please share your opinions. Thanks a lot!

    Read the article

  • Upcoming Database Design Pre-Cons

    - by drsql
    In July and October, I will be doing my "How To Design a Relational Database" full-day pre-con session in two places. First on July 26 for the East Iowa SQL Saturday, and then for the big daddy SQLPASS Summit in Charlotte, NC on October 14. You can see the entire abstract here on the SQL PASS site. It is essentially the same concept as last year, but this year I am making a few big changes to really give the people what they have desired (and am truly glad to have a swing at it several months...(read more)

    Read the article

  • Hello World

    - by prabhpreet
    Hello World. I am a hobbyist developer in my teens and I am a fan of Microsoft and its products. I am learning C#, have learned C, and have experimented with a few languages such as Python, Ruby, and Io (a really new language). Here, I am going to share my developing adventures. Watch out, World!

    Read the article

  • Which programming career path fits my terms? [closed]

    - by Goward Gerald
    I am sick and tired of my enterprise development job; I need a programming direction like this: demanded in the jobs market; demanded in the freelance market; can use Ubuntu as the development environment; not enterprise - standalone, mobile, web development, anything, just not enterprise. Basically, I need a programming direction which doesn't need 20 developers, terribly big database systems, and long-running projects with intense long-term support. I don't want an enterprise job where a lot of people work on one terribly big project and build modules for it all day long. Instead, I need something where: projects change pretty often; projects are little or medium-sized (in terms of code, modules, and the people working on them) but still not enterprise-sized; and it is possible for freelance or solo development, or at least doable with a team of 3-4 programmers - not like in enterprise, where you feel like a drop in the sea with your 50 classes while the system itself has hundreds of classes. Suggestions please?

    Read the article

  • Does Ubuntu run well on a USB HDD? Need suggestions

    - by Klaus
    Dear Linux and Ubuntu pros, I have here a company notebook, and because the HDD is fully encrypted I cannot install an extra partition for another system that I would like to use in my free time. And I really need another system, because this crap Windows here, with that much anti-virus, anti-spyware, anti-whatever on it, is so slow and annoying. What can I do? I could use an external USB HDD with another system. Because I would like to handle big files and so on, I don't want to use a USB stick. A USB 2.5" HDD + Ubuntu is, I think, the best option. Here are my questions: Is there anything I should watch out for? Does Ubuntu run well on an external HDD? Will I have big performance problems (because of the USB HDD)? Should I buy a very fast HDD for a lot of money, or is it not that important? Any suggestions? Thank you :)

    Read the article

  • Scrum Board for a distributed team

    - by Falcon
    I am looking for recommendations on a digital Scrum Board which can be shared over the internet. I imagine something like a big tablet on which you can draw and which remote users can access, too. I dislike Scrum software because I think one major benefit of a Scrum Board is its physical presence. It should be hard to ignore. The best solution would be two big tablets on which you can draw and which can be synchronized. Has anyone got product recommendations for something like this? Or would you rather use software? Kind regards, Falcon

    Read the article

  • Can Ubuntu Unity be made as snappy as Xubuntu?

    - by subeh.sharma
    I am a fan of Xubuntu just because of its snappiness. Now I know that it is based on the lightweight XFCE, which is the secret of this snappiness, but I am just wondering if something could be done on Unity to bring it, say, close to that snappiness. I have not installed NVIDIA's driver, as I have never seen any improvement from it on Ubuntu. Would love to hear views on this, in case somebody has been able to tweak some settings.

    Read the article

  • 4.8M wasn't enough so we went for 5.055M tpmc with Unbreakable Enterprise Kernel r2 :-)

    - by wcoekaer
    We released a new set of benchmarks today. One is an updated TPC-C result from a few months ago, where we had just over 4.8M tpmC at $0.98, now updated to 5.05M at $0.89. The other one is related to Java middleware performance. You can find the press release here. Now, I don't want to talk about the actual relevance of the benchmark numbers, as I am not on the benchmark team. I want to talk about why these numbers and these efforts, unrelated to what they mean to your workload, matter to customers. The actual benchmark effort is a very big, long, expensive undertaking where many groups work together as a big virtual team. Having the virtual team be within a single company of course helps tremendously... We already start with a very big server setup with tons of storage, many disks, lots of RAM, lots of CPUs, cores, threads, and large database setups. Getting the whole setup going so you can start tuning is, by itself, no easy task, but then the real fun starts with tuning the system for optimal performance -and- stability. A benchmark is not just revving an engine at high rpm, it's actually hitting the circuit. The tests require long runs, and require surviving availability tests, such as surviving crashes -and- recovery under load. In the TPC-C example, the X4800 system had 4TB of RAM, 160 threads (8 sockets, hyperthreaded, 10 cores/socket), tons of storage attached, tons of LUNs visible to the OS, flash storage, non-flash storage... many things at high scale that all have to be perfectly synchronized. During this process, we find bugs, we fix bugs, we find performance issues, we fix performance issues, we find interesting potential features to investigate for the future, we start new development projects for future releases, and all this goes back into the products. As more and more Oracle Linux customers run larger and larger, faster and faster, more mission-critical, more highly available databases, these things are just absolutely critical. Regardless of anyone's specific opinion about TPC-C or TPC-H or SPECjEnterprise etc., there is a ton of effort that the customer benefits from. All this work makes Oracle Linux and/or Oracle Solaris better platforms, whether it's faster, more stable, more scalable, or more resilient. It helps. Another point that I always like to reiterate around UEK and UEK2: we have our kernel source git repository online, with the complete changelog of the mainline kernel and our changes, easy to pull, easy to dissect, easy to know what went in when, why and where. No need to log into a website and manually click through pages to hopefully discover changes or patches. No need to untar two tarballs and run a diff.

    Read the article

  • #OOW 2012: IaaS, Private Cloud, Multitenant Database, and X3H2M2

    - by Eric Bezille
    The title of this post is a summary of the 4 announcements made by Larry Ellison today, during the opening session of Oracle Open World 2012... To know what's behind X3H2M2, you will have to wait a little, as I will go in order, beginning with the IaaS - Infrastructure as a Service - announcement.

    Oracle IaaS goes Public... and Private... Starting in 2004 with Fusion development, Oracle Cloud was launched last year to provide not only SaaS applications, based on standard development, but also the underlying PaaS required to build the specifics, and the required interconnections between applications, in and outside of the Cloud. Still, to cover the end-to-end Cloud Services spectrum, we had to provide Infrastructure as a Service, leveraging our servers, storage, OS, and virtualization technologies, all "Engineered Together". This Cloud Infrastructure was already available for our customers to rapidly build their own Private Cloud, either on SPARC/Solaris or x86/Linux... The second announcement made today brings that proposition a big step further: for cautious customers (like banks, or sensitive industries) who would like to benefit from the Cloud value of "as a Service" but don't want their data out in the Cloud, we propose that they operate the same systems that provide our Public Cloud Infrastructure - Exadata, Exalogic & SuperCluster - behind their firewall, in a Private Cloud model.

    Oracle 12c Multitenant Database This is also a major announcement made today about what's coming with Oracle Database 12c: the ability to consolidate multiple databases with no additional cost, especially in terms of the memory needed on the server node, which is often THE limiting factor for consolidation. The principle can be compared to Solaris Zones: you have a Database Container, which "owns" the memory and the database background processes, and "Pluggable" Databases inside this Database Container. This particular feature is a strong compelling event to evaluate Oracle Database 12c rapidly once it is available, as it is a major step forward into true database consolidation with multitenancy on a shared (optimized) infrastructure.

    X3H2M2, enabling the new Exadata X3 in-Memory Database Here we are: X3H2M2 stands for X3 (the new version of Exadata, also announced today) Heuristic Hierarchical Mass Memory, providing the capability to keep most if not all of the data in the memory cache hierarchy. Of course, this is the major software enhancement of the new X3 Exadata machine, but as it is software, our current customers will be able to benefit from it on their existing systems by upgrading to the new release. But that's not the only thing we did with X3; at the same time we have upgraded everything: the CPUs, adding more cores per server node (16 vs. 12, with the arrival of Intel E5 / Sandy Bridge), the memory, with 512GB per node as well, and the new Flash Fire card, bringing now up to 22 TB of flash cache. All of this 4TB of RAM + 22TB of flash is used cleverly, not only for reads but also for writes, by the X3H2M2 algorithm... making a very big difference compared to a traditional storage flash extension. But what do those extra performances bring you on an already very efficient system? Double your performance compared to the fastest storage array on the market today (including flash) and divide your storage price by 10 at the same time... Something to consider closely these days... Especially as we also announced the availability of a new Exadata X3-2 eighth rack: a good starting point.

    As you have seen, a major opening for this year again, with true innovation. But that was not the only thing we saw today: before Larry's talk, Fujitsu introduced in more depth the upcoming new SPARC processor that they are co-developing with us. And Andrew Mendelsohn - Senior Vice President, Database Server Technologies - came on stage to explain that the next step after I/O optimization for the database with Exadata is to accelerate the database at execution level by bringing functions into the SPARC processor silicon. All in all, to process more and more data... The big theme of the day... and of the Oracle User Groups conferences that were also happening today, where I had the opportunity to attend some interesting sessions on practical use cases of Big Data, one in finance and fraud profiling and the other on practical deployment of Oracle Exalytics for data analytics. In conclusion, one picture to try to size Oracle Open World... and you can understand why, with such rich content... and this is only the first day!

    Read the article

  • Is it normal for a programmer to work on multiple projects simultaneously?

    - by gasan
    At my current job I have 2 projects to work on. The first is a very huge system and the second one is smaller, but it is also big (the first project has been in development for 12 years, the second for 4 years). At first I was working only on the first project and was trying to get used to it. Then I was moved to the second project, so my knowledge of the first project became shady. Now I have to work on both projects at the same time. It's very hard for me because, although they both use Java, they use different frameworks, and the amount of code and business logic to understand is so big that I really can't hold both projects in my head. Is this normal, and should I get used to it, even though my expertise becomes very squashy, which wouldn't happen if I worked on only a single project? Or should I raise a concern, or maybe change employer?

    Read the article

  • SQL 2014 does data the way developers want

    - by Rob Farley
    A post I’ve been meaning to write for a while; good that it fits with this month’s T-SQL Tuesday, hosted by Joey D’Antoni (@jdanton). Ever since I got into databases, I’ve been a fan. I studied Pure Maths at university (as well as Computer Science), and am very comfortable with Set Theory, which undergirds relational database concepts. But I’ve also spent a long time as a developer, and appreciate that databases don’t exactly fit within the stuff I learned in my first year of uni, particularly the “Algorithms and Data Structures” subject, in which we studied concepts like linked lists. Writing in languages like C, we used pointers to quickly move around data, without a database in sight. Of course, if we had a power failure all this data was lost, as it was only persisted in RAM. Perhaps that’s why I’m a fan of database internals, of indexes, latches, execution plans, and so on – the developer in me wants to be reassured that we’re getting to the data as efficiently as possible. Back when SQL Server 2005 was approaching, one of the big stories was around CLR. Many were saying that T-SQL stored procedures would be a thing of the past because we now had CLR, and that was obviously going to be much faster than using the abstracted T-SQL. Around the same time, we were seeing technologies like Linq-to-SQL produce poor T-SQL equivalents, and developers had had a gutful. They wanted to move away from T-SQL, having lost trust in it. I was never one of those developers, because I’d looked under the covers and knew that despite being abstracted, T-SQL was still a good way of getting to data. It worked for me, appealing to both my Set Theory side and my Developer side. CLR hasn’t exactly become the default option for stored procedures, although there are plenty of situations where it can be useful for getting faster performance. SQL Server 2014 is different though, through Hekaton – its In-Memory OLTP environment. When you create a table using Hekaton (that is, a memory-optimized one), the table you create is the kind of thing you’d’ve made as a developer. It creates code in C leveraging structs and pointers and arrays, which it compiles into fast code. When you insert data into it, it creates a new instance of a struct in memory, and adds it to an array. When the insert is committed, a small write is made to the transaction log to make sure it’s durable, but none of the locking and latching behaviour that typifies transactional systems is needed. Indexes are done using hashes and using Bw-trees (which avoid locking through the use of pointers) and by handling each update as a delete-and-insert. This is data the way that developers do it when they’re coding for performance – the way I was taught at university before I learned about databases. Being done in C, it compiles to very quick code, and although these tables don’t support every feature that regular SQL tables do, this is still an excellent direction that has been taken. @rob_farley
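
    To make the "data the way developers do it" point concrete, declaring a memory-optimized table in SQL Server 2014 looks roughly like the sketch below. The DDL (a hash index with a bucket count, MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA) is standard Hekaton syntax; the Python/pyodbc wrapper, the server name, and the table itself are illustrative assumptions, and the target database also needs a MEMORY_OPTIMIZED_DATA filegroup, which is not shown here.

```python
import pyodbc

# Placeholder connection string - adjust server, database and auth to taste.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=SalesDb;Trusted_Connection=yes;",
    autocommit=True,
)

# A hypothetical memory-optimized table: rows live in memory-resident
# structures, the primary key is a hash index, and durability comes from
# small writes to the transaction log rather than pages and latches.
conn.execute("""
CREATE TABLE dbo.ShoppingCart
(
    CartId      INT        NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId  INT        NOT NULL INDEX ix_Customer NONCLUSTERED,
    CreatedUtc  DATETIME2  NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
""")
```

    The point of the example is only the WITH (MEMORY_OPTIMIZED = ON, ...) clause and the hash primary key; querying the table afterwards stays ordinary T-SQL.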

    Read the article
