Search Results

Search found 3516 results on 141 pages for 'malloc history'.


  • Windows: starting sqlplus in new window from cygwin bash

    - by katsumii
    When I start sqlplus, more often than not I want it to start in a new window, whether on Linux/Solaris GNOME or on Windows. I seldom use GNOME, so I never bothered to figure out how to do it there. On Windows, one can use the Windows menu or the Win+R "Run" dialog, but I prefer using bash, because that way I can keep the history in my ~/.bash_history file. There are two ways: cmd.exe or cygstart. For example, to start the default sqlplus.exe and connect to the default local instance:

        $ cmd /c "start sqlplus sys/oracle as sysdba"

    Second example: to start sqlplus from a second Oracle home and connect to a non-default local instance:

        $ ORACLE_SID=orcl cygstart /cygdrive/g/app/product/11.2.0.3/dbhome_1/BIN/sqlplus scott

    I hope this tip helps reduce your DBA time.

    Read the article

  • Microsoft Sql Server 2008 R2 System Databases

    For a majority of software developers, little time is spent understanding the inner workings of the database management systems (DBMS) they use to store data for their applications. I personally place myself in this group. In my case, I have used various versions of Microsoft's SQL Server (2000, 2005, and 2008 R2) and only recently learned how valuable the system databases really are, while preparing to deliver a lecture on "SQL Server 2008 R2 System Databases".

    So what are system databases in MS SQL Server, and why should I know them? Microsoft uses system databases to support the SQL Server DBMS, much like a developer uses config files or database tables to support an application. Each of these system databases provides specific functionality that allows MS SQL Server to operate.

        Name          Database File            Log File
        Master        master.mdf               mastlog.ldf
        Resource      mssqlsystemresource.mdf  mssqlsystemresource.ldf
        Model         model.mdf                modellog.ldf
        MSDB          msdbdata.mdf             msdblog.ldf
        Distribution  distmdl.mdf              distmdl.ldf
        TempDB        tempdb.mdf               templog.ldf

    Master Database
    If you have used MS SQL Server then you should recognize the Master database, especially if you have used SQL Server Management Studio (SSMS) to connect to a user-created database. MS SQL Server requires the Master database in order for the DBMS to start, because of the information it stores. Examples of data stored in the Master database: user logins, linked servers, configuration information, and information on user databases.

    Resource Database
    Honestly, I never knew this database even existed until I started to research SQL Server system databases. That is largely because the Resource database is hidden from users; in fact, its database files are stored in the Binn folder instead of the standard MS SQL Server database folder path. This database contains all the system objects that can be accessed by all other databases. In short, it holds all the system views and stored procedures that appear in every user database for reporting system information. One of the many benefits of storing system views and stored procedures in a single hidden database is that it simplifies upgrading a SQL Server instance; maintenance is also reduced, since only one code base has to be maintained for all of the system views and procedures.

    Model Database
    The Model database, as the name implies, is the model for all new databases created by users. This allows default database objects to be predefined for every new database in an MS SQL Server instance. For example, if every database created by a user needs an "Audit" table, then defining the "Audit" table in Model guarantees that the table will be present in every new database created after Model is altered.

    MSDB Database
    The MSDB database is used by SQL Server Agent, SQL Server Database Mail, and SQL Server Service Broker, along with SQL Server itself. The SQL Server Agent uses this database to store job configurations and job schedules, along with SQL alerts and operators. In addition, this database stores all SQL job parameters and each job's execution history. Finally, it also stores database backup and maintenance plans, as well as details pertaining to SQL log shipping, if it is being used.

    Distribution Database
    The Distribution database is only used during replication, and stores metadata and history information pertaining to the act of replicating data. Furthermore, when transactional replication is used, this database also stores information regarding each transaction. It is important to note that replication is not turned on by default in MS SQL Server, and that the Distribution database is hidden from SSMS.

    Tempdb Database
    Tempdb, as the name implies, is used to store temporary data and data objects, for example temp tables and temporary stored procedures. It is important to note that all data and data objects in this database are cleared when SQL Server restarts. SQL Server also uses this database for some internal operations, typically large sort and index operations. Finally, this database stores row versions if row versioning or snapshot-isolation transactions are being used by SQL Server.

    Additionally, I would love to hear from others about their experiences using system databases, tables, and objects in real-world environments.

    Read the article

  • Troubleshooting Blocked Transaction in SQL Server

    - by ChrisD
    While troubleshooting a blocked transaction issue recently, I found this code online. My apologies for not citing its source, but it's lost in my browser history somewhere. While the transaction is executing and blocked, open a connection to the database containing the transaction and run the following to return both the SQL statement being blocked (the Victim) and the statement that's causing the block (the Culprit):

        -- prepare a table so that we can filter out sp_who2 results
        DECLARE @who TABLE(
            BlockedId INT,
            Status VARCHAR(MAX),
            LOGIN VARCHAR(MAX),
            HostName VARCHAR(MAX),
            BlockedById VARCHAR(MAX),
            DBName VARCHAR(MAX),
            Command VARCHAR(MAX),
            CPUTime INT,
            DiskIO INT,
            LastBatch VARCHAR(MAX),
            ProgramName VARCHAR(MAX),
            SPID_1 INT,
            REQUESTID INT)

        INSERT INTO @who EXEC sp_who2

        -- select the blocked and blocking queries (if any) as SQL text
        SELECT
            (SELECT TEXT FROM sys.dm_exec_sql_text(
                (SELECT handle FROM (
                    SELECT CAST(sql_handle AS VARBINARY(128)) AS handle
                    FROM sys.sysprocesses WHERE spid = BlockedId) query)
            )) AS 'Blocked Query (Victim)',
            (SELECT TEXT FROM sys.dm_exec_sql_text(
                (SELECT handle FROM (
                    SELECT CAST(sql_handle AS VARBINARY(128)) AS handle
                    FROM sys.sysprocesses WHERE spid = BlockedById) query)
            )) AS 'Blocking Query (Culprit)'
        FROM @who
        WHERE BlockedById != '  .'

    Read the article

  • Using C++ but not using the language's specific features - should I switch to C?

    - by Petruza
    I'm developing a NES emulator as a hobby, in my free time. I use C++ because it is the language I use the most, know the best, and like the most. But now that I have made some progress on the project, I realize I'm hardly using any C++-specific features, and could have done it in plain C with the same result. I don't use templates, operator overloading, polymorphism, or inheritance. So what would you say? Should I stay with C++ or rewrite it in C? I wouldn't do this to gain performance - that could come as a side effect - but the idea is: why should I use C++ if I don't need it? The only C++ features I'm using are classes to encapsulate data and methods, but that can be done as well with structs and functions; I'm using new and delete, but could as well use malloc and free; and I'm using inheritance just for callbacks, which could be achieved with pointers to functions. Remember, it's a hobby project, and I have no deadlines, so the extra time and work a rewrite would require are not a problem - it might even be fun. So, the question is: C or C++?
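
    A minimal sketch of the function-pointer callback pattern mentioned above, in plain C (all names here are illustrative, not taken from the emulator):

        /* Hedged sketch: replacing inheritance-based callbacks with a
           function pointer plus a context argument. */
        #include <stdio.h>

        typedef void (*irq_handler)(void *ctx);   /* the "virtual method" */

        struct cpu {
            irq_handler on_irq;   /* installed by the client at setup time */
            void *ctx;            /* user data handed back to the callback */
        };

        static void log_irq(void *ctx) {
            printf("IRQ raised by %s\n", (const char *)ctx);
        }

        int main(void) {
            struct cpu c = { log_irq, "PPU" };
            c.on_irq(c.ctx);      /* what a virtual call would do in C++ */
            return 0;
        }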

    Read the article

  • The JCP Celebrates 15 Years in 2014

    - by Heather VanCura
    The JCP Program is celebrating fifteen years of collaborative work from companies, academics, individual developers, and not-for-profits from all over the world who have come together to develop Java technology through the JCP. In June, we held a party at the Computer History Museum in Mountain View, California, in conjunction with the Silicon Valley Java User Group (SVJUG). You can check out the Nighthacking videos and pictures from the party: Video Interview with James Gosling; Video Interview with Van Riper & Kevin Nilson; Video Interview with Rob Gingell. If you missed the party, we have kits for Java User Groups (JUGs) to order, so you can celebrate with your local JUG in 2014. Fill out the order form and we will send a presentation, party favors, posters, and a raffle item for your JUG's 15-year JCP celebration! And next month we will have another celebration during the annual JavaOne Conference in San Francisco. The JCP Party & Awards ceremony will be Monday, 29 September at the Hilton in Union Square. Reserve your ticket early!

    Read the article

  • When designing a job queue, what should determine the scope of a job?

    - by Stuart Pegg
    We've got a job queue system that'll cheerfully process any kind of job given to it. We intend to use it to process jobs that each contain two tasks:

    Job (pass information from one server to another)
    - Fetch task (get the data, slowly)
    - Send task (send the data, comparatively quickly)

    The difficulty we're having is that we don't know whether to break the tasks into separate jobs, or process the job in one go. Are there any best practices or useful references on this subject? Is there some obvious benefit to a method that we're missing? So far we can see these benefits for each method:

    Split:
    - Job lease length reflects job length, rather than the total of the two
    - Finer granularity on recovery: if we lose outgoing connectivity we can tell them all to retry
    - The starting state of the second task is saved to job history, which helps with debugging (although similar logging could be added in the single-task method)

    Single:
    - Single job to be scheduled: less processing overhead
    - Data not stale on recovery: if the outgoing downtime is quite long, the pending Send jobs could be outdated

    Read the article

  • Ubuntu Gnome 14.04 - 100% CPU usage alternating between cores

    - by AwDeOh
    I've noticed my Ubuntu Gnome 14.04 has been getting a bit sluggish lately - things like the Gnome Shell overview animation are jerky where they were lightning fast, and Elder Scrolls Online is stuttering and dropping to low FPS where I previously had a solid 50-60 fps. Out of interest I looked at the CPU History and, when running nothing but the System Monitor, it showed 100% load (screenshot omitted). That was 15 minutes ago. The 100% load seemed to be alternating between the cores.

    PC specs:
    - i3 2130 processor
    - 8 GB DDR3 RAM
    - ASUS P8-Z77M motherboard
    - Samsung 128 GB SSD

    I've been trying to reproduce the problem, and while I'm not getting the 100% any more at idle, the System Monitor shows an average load of about 20-30%, and that's with just Chrome and the System Monitor open. Oddly, if I touch nothing, it averages out to about 20%; if I start moving the mouse around and do some typing, it's closer to 40%. Is this normal? Any help appreciated - I wouldn't even know where to start here.

    Read the article

  • Field of Poppies Wallpaper

    - by Asian Angel
    Poppies Field [DesktopNexus]

    Read the article

  • Code maintenance: add comments in code, or just leave it to version control?

    - by Chillax
    We have been asked to add comments with start tags, end tags, a description, the solution, etc. for each change we make to the code as part of fixing a bug or implementing a CR. My concern is: does this provide any added value? As it is, we have all the details in the version control history, which lets us track each and every change. But my leads are insisting on the comments as a "good" programming practice. One of their arguments is that when a CR has to be de-scoped or changed, it would be cumbersome if the comments were not there. Considering that the changes would largely be in the middle of the code, would it really help to add comments for each and every change we make? Shouldn't we leave it to version control?

    Read the article

  • Using Google Analytics to determine how much time a visitor spends in each section of my site

    - by flossfan
    I have a site with various pages, like:

        /about/history
        /about/team
        /contact/email-us
        /contact

    I want to figure out how much time people are spending on the entire /about section, and how much on the /contact section. If I run a query on the Google Analytics API and set the dimension to ga:pagePathLevel1 and the metric to ga:avgTimeOnPage, I get results like this:

        { pagePathLevel1: /about, avgTimeOnPage: 28 },
        { pagePathLevel1: /contact, avgTimeOnPage: 10 }

    This looks roughly like what I want, but I'm not sure how to interpret it: is the value of avgTimeOnPage the average time spent by any user on all pages that match that path? Or is it the average time spent by any user on any single page that matches that path? I'm looking for the average time spent across all pages matching that path, but the time estimates look shorter than I'd expect.

    Read the article

  • Jaroslav Tulach's Report on NetBeans at OSGiCon

    - by Geertjan
    The latest NetBeans Podcast was recorded over the last few weeks and released yesterday. Aside from the NetBeans news items and interviews (interesting stuff about Joel Murach's new Java book using NetBeans, as well as the new developments in the NetBeans Groovy editor), there is, as always, an "API Design Tip" segment. That's always worth listening to, of course, but especially this time, because here Jaroslav Tulach talks at some length about his recent trip to OSGiCon, as well as the history and status of OSGi support in NetBeans IDE. Start listening from just before the 30th minute (i.e., the final segment) if you're interested in this particular topic: https://blogs.oracle.com/nbpodcast/entry/netbeans_podcast_60 For example, hear about how JDeveloper got faster by switching from Equinox to Netbinox. And... will Eclipse find itself on the same OSGi container too?

    Read the article

  • What is the term for a really BIG source code commit?

    - by Ida
    Sometimes when we check the commit history of a piece of software, we may see a few commits that are really BIG - they may change 10 or 20 files with hundreds of changed source code lines (delta). I remember that there is a commonly used term for such a BIG commit, but I can't recall exactly what that term is. Can anyone help me? What is the term that programmers usually use to refer to such a BIG, giant commit? BTW, is committing a lot of changes all together a good practice? UPDATE: thank you guys for the inspiring discussion! But I think "code bomb" is the term I was looking for.

    Read the article

  • Reliability Monitor is the Best Windows Troubleshooting Tool You Aren’t Using

    - by The Geek
    When it comes to hidden gems in Windows, nothing beats the Reliability Monitor tool, hidden behind a link inside another tool that you don't use either. Why Microsoft doesn't shine more light on this really useful troubleshooting tool, we'll never know. Reliability Monitor tracks the history of your computer - any time an application crashes, hangs, or Windows gives you a blue screen of death. It also tracks other important events, like when software is installed, or Windows Update loads a new patch. It's an extremely useful tool. And yes, it's in Windows 7 and 8… and even 8.1. It might be in Vista, but who uses that anymore?

    Read the article

  • SSMS Tools Pack 1.9.4 is out! Now with SQL Server 2011 (Denali) CTP1 support.

    - by Mladen Prajdic
    To end the year on a good note, this release adds support for SQL Server 2011 (Denali) CTP1 and fixes a few bugs. Because of the new SSMS shell in SQL 2011 CTP1, the SSMS Tools Pack 1.9.4 doesn't have the regions and debug sections functionality for now. The fixed bugs are:

    - A bug that prevented creating insert statements for a database
    - A bug that didn't script commas as decimal points correctly for non-US settings
    - A bug with searching through grid results
    - A threading bug that sometimes happened when saving Window Content History
    - A bug with Window Connection Coloring throwing an error on startup if a server color was undefined
    - A bug with changing shortcuts in SSMS for various features

    You can download the new version 1.9.4 here. Enjoy it!

    Read the article

  • git changing head not reflected on co-dev's branch

    - by stevekrzysiak
    Basically, we undid history. I know this is bad, and I am already committed to avoiding this at all costs in the future, but what is done is done. Anyway, I issued a git push origin <1_week_old_sha>:master to undo some bad commits. I then deleted a buggered branch called release (which had also received some bad commits) from the remote, and then branched a new release off master. I pushed this to the remote. So basically, the remote master & release are clones, and just how I want them. The issue is that if I clone the repo anew (or work in my current repo), everything looks great... but when my co-devs delete their release branch and create a new one based off the new remote release I created, they still see all the old junk I tried to remove. I feel this has to do with some local .git files mistaking the new release branch for the old release. Any thoughts? Thanks.

    Read the article

  • Learn about CRM and CX at Oracle Days 2012

    - by Richard Lefebvre
    Oracle Day 2012 features learning tracks and sessions tailored for accelerating your business in today's environment. Oracle simplifies IT by investing in best-of-breed technologies at every layer of the technology stack and engineering them to work together, so you can focus on driving your business forward. Throughout its history, Oracle has proved it can address the most complex IT challenges and solve the business problems of our customers. Discover Oracle's strategy for powering innovation in the areas of Cloud, Social, Mobile, Business Operations, Data Center Optimization, Big Data, and Analytics.

    Oracle Day 2012: Tracks
    - Engine for Growth: The business case for the optimized data center
    - Powering innovation for your enterprise applications
    - Architect your cloud: A blueprint for Cloud builders
    - See more, act faster: Powering innovation with analytics
    - Business operations: Powering business innovation
    - Customer Experience: Empowering people, powering brands

    Check out the agenda at your local event for more details.

    Read the article

  • How can I get gcc to write a file larger than 2.0 GB?

    - by fred.bear
    I wanted to recompile 'xxd' (written in C), so I installed CodeBlocks as the IDE. All seemed to go well until I discovered that I couldn't write past the 2.0 GB barrier... I've read that 'gcc' needs to be recompiled (that sounds a bit dramatic). I've read that I can use 'fread64()' instead of 'fread()' (didn't work). I've read something about compiler options (?), but I get lost at that point. I am surprised that it didn't work out of the box, as I thought the 2.0 GB limit was ancient history as far as defaults go... wrong again? :( My OS is 32-bit, on 32-bit hardware. The gcc version reports as: gcc (Ubuntu 4.4.3-4ubuntu5) 4.4.3. Is there a simple way around this issue? PS: I was fascinated by the WARNINGS: section of 'info xxd' (..only on Linux ;)
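
    A minimal sketch of the usual fix on 32-bit Linux - Large File Support rather than a recompiled gcc - assuming glibc (the file name and offset below are just for illustration). Defining _FILE_OFFSET_BITS=64 makes off_t 64-bit, so the ordinary stdio calls can cross the 2.0 GB mark; no fread64() is needed:

        /* bigfile.c - hedged sketch of Large File Support on 32-bit Linux.
           Build: gcc -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 bigfile.c -o bigfile */
        #include <stdio.h>

        int main(void)
        {
            FILE *f = fopen("big.out", "wb");
            if (!f) { perror("fopen"); return 1; }

            /* Seek 3 GiB into the file and write one byte: this fails with a
               32-bit off_t, but succeeds once LFS is enabled. */
            if (fseeko(f, 3LL * 1024 * 1024 * 1024, SEEK_SET) != 0) {
                perror("fseeko");
                return 1;
            }
            fputc('x', f);
            fclose(f);
            return 0;
        }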

    Read the article

  • Unable to list contents/remove directory (linux ext3)

    - by RedKrieg
    System is CentOS 5 x86_64, completely up to date. I've got a folder that can't be listed (ls just hangs, eating memory until it is killed). The directory size is nearly 500k:

        root@server [/home/user/public_html/domain.com/wp-content/uploads/2010/03]# stat .
          File: `.'
          Size: 458752      Blocks: 904        IO Block: 4096   directory
        Device: 812h/2066d  Inode: 44499071    Links: 2
        Access: (0755/drwxr-xr-x)  Uid: ( 3292/    user)   Gid: ( 3287/    user)
        Access: 2012-06-29 17:31:47.000000000 -0400
        Modify: 2012-10-23 14:41:58.000000000 -0400
        Change: 2012-10-23 14:41:58.000000000 -0400

    I can see the file names if I use ls -1f, but it just repeats the same 48 files ad infinitum, all of which have non-ASCII characters somewhere in the file name:

        La-critic\363-al-servicio-la-privacidad-300x160.jpg

    When I try to access the files (say, to copy them or remove them) I get messages like the following:

        lstat("/home/user/public_html/domain.com/wp-content/uploads/2010/03/Sebast\355an-Pi\361era-el-balc\363n-150x120.jpg", 0x7fff364c52c0) = -1 ENOENT (No such file or directory)

    I tried altering the code found on this man page, modifying it to call unlink for each file. I get the same ENOENT error from the unlink call:

        unlink("/home/user/public_html/domain.com/wp-content/uploads/2010/03/Marca-naci\363n-Madrid-150x120.jpg") = -1 ENOENT (No such file or directory)

    I also straced a "touch", grabbed the syscalls it makes and replicated them, then tried to unlink the resulting file by name. This works fine, but the folder still contains an entry by the same name after the operation completes, and the program runs for an arbitrarily long time (strace output ended up at 20 GB after 5 minutes and I stopped the process).

    I'm stumped on this one. I'd really prefer not to have to take this production machine (hundreds of customers) offline to fsck the filesystem, but I'm leaning toward that being the only option at this point. If anyone's had success using other methods for removing files (by inode number - I can get those with the getdents code), I'd love to hear them. (Yes, I've tried find . -inum <inode> -exec rm -fv {} \; and it still has the problem with unlink returning ENOENT.)

    For those interested, here's the diff between that man page's code and mine. I didn't bother with error checking on mallocs, etc., because I'm lazy and this is a one-off:

        root@server [~]# diff -u listdir-orig.c listdir.c
        --- listdir-orig.c      2012-10-23 15:10:02.000000000 -0400
        +++ listdir.c   2012-10-23 14:59:47.000000000 -0400
        @@ -6,6 +6,7 @@
         #include <stdlib.h>
         #include <sys/stat.h>
         #include <sys/syscall.h>
        +#include <string.h>

         #define handle_error(msg) \
             do { perror(msg); exit(EXIT_FAILURE); } while (0)
        @@ -17,7 +18,7 @@
             char d_name[];
         };

        -#define BUF_SIZE 1024
        +#define BUF_SIZE 1024*1024*5

         int main(int argc, char *argv[])
         {
        @@ -26,11 +27,16 @@
             struct linux_dirent *d;
             int bpos;
             char d_type;
        +    int deleted;
        +    int file_descriptor;

             fd = open(argc > 1 ? argv[1] : ".", O_RDONLY | O_DIRECTORY);
             if (fd == -1)
                 handle_error("open");

        +    char* full_path;
        +    char* fd_path;
        +
             for ( ; ; ) {
                 nread = syscall(SYS_getdents, fd, buf, BUF_SIZE);
                 if (nread == -1)
        @@ -55,7 +61,24 @@
                     printf("%4d %10lld %s\n", d->d_reclen,
                            (long long) d->d_off, (char *) d->d_name);
                     bpos += d->d_reclen;
        +            if ( d_type == DT_REG )
        +            {
        +                full_path = malloc(strlen((char *) d->d_name) + strlen(argv[1]) + 2); //One for the /, one for the \0
        +                strcpy(full_path, argv[1]);
        +                strcat(full_path, (char *) d->d_name);
        +
        +                //We're going to try to "touch" the file.
        +                //file_descriptor = open(full_path, O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK, 0666);
        +                //fd_path = malloc(32); //Lazy, only really needs 16
        +                //sprintf(fd_path, "/proc/self/fd/%d", file_descriptor);
        +                //utimes(fd_path, NULL);
        +                //close(file_descriptor);
        +                deleted = unlink(full_path);
        +                if ( deleted == -1 ) printf("Error unlinking file\n");
        +                break; //Break on first try
        +            }
                 }
        +        break; //Break on first try
             }

             exit(EXIT_SUCCESS);

    Read the article

  • Print jobs to Epson Stylus Photo 640 all stopped after upgrade 10.04 to 12.04

    - by Tessa Sayers
    Upgraded from Ubuntu 10.04 to 12.04 on 19th Oct 2012. Now all print jobs end up in the print queue labelled "Stopped". I reinstalled the printer driver - it is Gutenprint 5.2.8-pre1. Looking at "http://localhost:631/jobs" shows an error message by each stopped job as follows: "The PPD version (5.2.5 Simplified) is not compatible with Gutenprint 5.2.8-pre1." I found a long bug-fixing history on bugs.launchpad.net which seems to imply that this problem has been fixed. It seems to be a problem with the installation not updating the PPD files. Is there any workaround to fix this problem?

    Read the article

  • Contiguous Time Periods

    It is always more efficient to maintain referential integrity by using constraints rather than triggers. Sometimes it isn't obvious how to do this. Until a recent idea by Alex Kuznetsov, the history table presented problems for checking data that were difficult to solve with constraints. Joe Celko explains.

    Read the article

  • What is the best way to learn object-oriented principles?

    - by Mike
    I am interested in OOP principles, and I have found lots of documentation and books about them - for instance in C++, Java, .NET, PHP, and so on. But if I only want to learn the OOP principles themselves, and their differences, not a particular language, what can I do? I want good documentation on OOP specifically, not a whole book about everything except OOP :) I'm after specific answers: how it works, the main features, pictures, or videos, or a forum, or even Stack... Every time I begin studying, I end up reading a whole history of programming, computer science, software development, and bla bla. I need specific answers. I really need to learn, and if possible, I need examples and exercises. Thanks in advance.

    Read the article

  • Is there an application or method to log data transfers?

    - by Gaurav_Java
    My friend asked me for some files, which I let him take from my system. But I did not watch him do it, so I was left with a doubt: what extra files or data did he take from my system? I was wondering: is there any application or method that shows what data was copied to which USB device (showing the device name if available, otherwise the device id), and what data was copied onto the Ubuntu machine? Something like a history of USB and system data transfers. I think this feature exists in KDE. This would be really useful in many ways, as it provides a real-time monitoring utility for USB mass storage device activity on any machine.

    Read the article

  • Facebook likes reset after moving to HTTPS (URL manually set in script, though)

    - by aarondicks
    Hi fellow Facebook developers. I've got a question regarding the Facebook like button. We worked on a piece recently that embeds a number of social share buttons (please see the source code below, or here on Harvey Water Softeners' website). When the piece was released it was on HTTP, and it received over 2k likes (the URL 'slug' hasn't changed at all). The site was recently migrated to always-on HTTPS, and the like data has been reset - we've been left with 50 new, recent likes. As you can see in the source code, the URL is set explicitly to like the HTTP version, which I believe to be correct. Can anyone help me work out what's happened here? Here's the HTML for the like button:

        <div class="fb-like" data-href="http://www.harveywatersofteners.co.uk/history-interior-design" data-layout="box_count" data-action="like" data-show-faces="false" data-share="false"></div>

    Thanks in advance
    Aaron

    Read the article

  • Could not write bytes: broken pipe - looking for log of removed packages

    - by user288987
    I have a dual-boot system with 12.04 and Windows 7. Ubuntu worked fine yesterday, but this morning upon boot I get the error in the subject line. I searched the forums, without success, for a recovery. I tried sudo gedit /var/log/apt/history.log to see the log of removed packages, but get the following:

        ** (gedit:976): WARNING **: Command line 'dbus-launch --autolaunch=2d7d18532e9953bc8a2b852e00000007 --binary-syntax --close-stderr' exited with non-zero exit status 1: Autolaunch error: X11 initialization failed.\n
        Cannot open display:
        Run 'gedit --help' to see a full list of available command line options.

    Does anyone have any suggestions for a fix? Please let me know if you require any additional information. Thanks! Mark

    Read the article

  • Are "skip deltas" unique to svn?

    - by echinodermata
    The good folks who created the SVN version control system use a structure they refer to as "skip deltas" to store the revision history of files internally. A revision is stored as a delta against an earlier revision. However, revision N is not necessarily stored as a delta against revision N-1, like this:

        0 <- 1 <- 2 <- 3 <- 4 <- 5 <- 6 <- 7 <- 8 <- 9

    Instead, revision N is stored as a delta against N-f(N), where f(N) is the greatest power of two that divides N:

        0 <- 1
        2 <- 3
        4 <- 5
        6 <- 7
        0 <------ 2
        4 <------ 6
        0 <---------------- 4
        0 <------------------------------------ 8 <- 9

    (Superficially it looks like a skip list, but really it's not that similar - for instance, skip deltas are not interested in supporting insertion in the middle of the list.) You can read more about it here. My question is: do other systems use skip deltas? Were skip deltas known/used/published before SVN, or did the creators of SVN invent it themselves?
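
    For the curious, f(N) is cheap to compute: in two's complement, N & -N isolates the lowest set bit, which for N > 0 is exactly the greatest power of two dividing N. A minimal sketch reproducing the delta bases from the diagram above:

        /* skipdelta.c - minimal sketch: the base revision for each delta. */
        #include <stdio.h>

        /* greatest power of two dividing n (n > 0): isolate the lowest set bit */
        static unsigned f(unsigned n) { return n & -n; }

        int main(void)
        {
            for (unsigned n = 1; n <= 9; n++)
                printf("revision %u is stored as a delta against revision %u\n",
                       n, n - f(n));
            return 0;
        }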

    Read the article
