Search Results

Search found 27530 results on 1102 pages for 'write binary'.


  • Rule of thumb for cost vs. savings for code re-use

    - by Styler
    Is it a good rule of thumb to always write code with the intent of re-using it somewhere down the road? Or, depending on the size of the component you are writing, is it better practice to design it for re-use only when it makes sense with regard to the time spent on it? What is a good rule of thumb for spending extra time on analysis and design of project components that have "some probability" of being needed later down the road for other things that may or may not need this part? For example, say I need project X to do things A and B. A definitely needs to be written for re-use because it just makes sense to do so. B is very project-specific at the moment, and I can hack it all together in a couple days to finish the project on time and give everyone kudos for being a great team, etc. Or we could say, let's spend a whole friggin' 2 weeks figuring out what project Y/Z might need this thing for and spend a load of extra time on part B because someday we might need to use it on project Y/Z (where the savings will be realized). I'd imagine a perfect-world situation would be a nicely crafted combination of project-specific vs. re-use architected components given the project. However, some code shops might feel it would be a great idea to write everything with the intention of using it at some point down the road.

    Read the article

  • What to choose for a multilingual site with support for Markdown and commenting

    - by Kent
    I want to publish articles at a multilingual site. I want to be able to write an article in two languages and have them available on separate URLs: thesite.foo/english-breakfast thesite.com/engelsk-frukost If the user's web browser is set to English I'd like to show a small notice at the top of the Swedish version with a link to the English one. The link should have an appropriate rel attribute for a translation (search for hreflang at http://diveintohtml5.org/semantics.html). There should be a way to list all articles belonging to these sets: Swedish only, English only, Swedish versions + English only, English versions + Swedish only. I'd like to publish these as four RSS feeds. And I would like to have two versions of the main site, one in Swedish (showing Swedish versions + English only) and one in English (showing English versions). I shall be able to write the articles using Markdown, as that is the formatting language I find most convenient. There should be a way for users to comment, and some kind of way for me to protect myself against comment spam. I am leaning towards learning Drupal. I suspect I'll have to code this behavior myself as a module. To be frank I'd rather work with Java. Is Drupal the way to go? Or is there something more suitable for this project?

    Read the article

  • aligning truecrypt partition on 1.5TB 4kB sector drive

    - by pQd
    hi, aligning partitions to start at a real physical sector boundary of ssds / striped raids / 4kB drives is a 'good thing to do', but i've run into a problem when trying to do it for a truecrypt partition that will contain ext3 on it. or so it seems. when the drive in question is partitioned properly and formatted with ext3 i get very reasonable write speeds around 70-80MB/s, but when i put truecrypt and ext3 on top of it write performance becomes very unstable and goes between 1-25MB/s with very high io-wait. on the same server i don't have any performance issues with ext3 on top of truecrypt on regular 512B-sector 500GB sata disks. so my best guess is that the iowaits are caused by misalignment, but i cannot really find reliable information on how to calculate the optimal partition beginning. i've tried to start it at logical sector 128, and i've also tried sector 8132 as suggested here, but both gave me very bad and unstable performance. do you have any experience with a similar setup? thanks!
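    (A back-of-the-envelope sketch of the alignment arithmetic, assuming 512-byte logical sectors: 4096 / 512 = 8, so a partition start is physically aligned whenever its starting logical sector is a multiple of 8; sector 2048 (2048 x 512 bytes = 1 MiB) satisfies that and is what most recent partitioning tools default to. Note that 128 is a multiple of 8 while 8132 is not, so the two starting sectors tried above are not equivalent from a pure alignment standpoint.)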

    Read the article

  • Internal hard drive, can't format

    - by user113923
    I cannot format the hard drive of my laptop anymore. Here is how I proceed: I start my computer with a USB live drive (Ubuntu 10.04 LTS - the Lucid Lynx). Then I start Disk Utility and try to format the hard drive - I chose to format the Master Boot Record but I get the following error:
        Error creating partition table: helper exited with exit code 1: Error calling fsync(2) on /dev/sda: Input/output error
    If I try to delete partitions I get the following error:
        Error erasing: helper exited with exit code 1: In part_del_partition: device_file=/dev/sda, offset=32256
        Entering MS-DOS parser (offset=0, size=30005821440)
        MSDOS_MAGIC found
        looking at part 0 (offset 32256, size 4096157184, type 0x83) new part entry
        looking at part 1 (offset 10618836480, size 8414461440, type 0x83) new part entry
        looking at part 2 (offset 19033297920, size 1077511680, type 0x82) new part entry
        looking at part 3 (offset 20110809600, size 9895011840, type 0x07) new part entry
        Exiting MS-DOS parser
        MSDOS partition table detected
        got it
        got disk
        got partition - part-type=0
        Error: Input/output error during write on /dev/sda
        ped_disk_commit_to_dev() failed
    If I try to install Ubuntu from the USB onto the hard drive and choose "erase and use the entire disk" I get the error message "Input/output error during write on /dev/sda". For side info, I have at the moment 4 partitions on my hard drive: /dev/sda1 (ext2), /dev/sda2 (ext2), /dev/sda3 (swap), /dev/sda1 (ntfs) + /dev/sda (unallocated space). My ultimate goal is to reinstall Ubuntu and have only 2 partitions... I would really appreciate any help here! Thanks, JB

    Read the article

  • Have you used nDepend?

    - by Nick Harrison
    Have you used NDepend? I have often wanted to use it, but never spent the money on it. I have developed many tools that try to do pieces of what NDepend does, but never with as much success as they reach. Put simply, it is a tool that will allow you to understand and monitor the architecture of your software, and it does it in some pretty amazing ways. One of the most impressive features is something that they call Code Query Language. It allows you to write queries very similar to SQL to track the performance of various software metrics and use this to identify areas that are out of compliance with your standards and architecture. For instance, once you have analyzed your project, you can write queries such as:
        SELECT METHODS WHERE IsPublic AND CouldBePrivate
    You can also set up such queries to provide warnings if there are records returned. You can incorporate this into your daily build and compare build against build. There are over 82 metrics included to allow you to view your code from a variety of angles. I have often advocated for a "Code Inventory" database to track the state of software and the ROI on software investments. This tool alone will take you about 90% of the way there. If you are not using it yet, I strongly recommend that you do!
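    (From memory of the older CQL constraint syntax, so treat the exact keywords as an assumption and check NDepend's own documentation: a query wired to warn during the daily build might look roughly like this.)
        // Assumed CQL syntax for a build-time warning; verify against NDepend's docs before relying on it.
        WARN IF Count > 0 IN SELECT METHODS WHERE NbLinesOfCode > 30 ORDER BY NbLinesOfCode DESC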

    Read the article

  • How can Nagios handle non-threshold based plugins?

    - by FliesLikeABrick
    I am writing a Nagios plugin to monitor trends of a certain storage resource utilization (e.g. gradual increases are fine, but an instantaneous/sudden increase or decrease in resource usage may indicate a problem). For what it's worth, it is reviewing the last N entries in an RRD file generated by a custom cacti data source/templates. What is the "right" way to handle Nagios notification config/implementation for this? The problem is that it the plugin would exit as warning/critical for one polling period, but in the next it would be fine (or 3 polling periods later, if I look at 3 polling periods worth of data). I guess the question is: should I just write it in such a way that it will alert for X polling periods, or should I find a way to write it such that manual intervention is required for it to clear (such as logging into the monitoring server or hitting a URL to run a script that submits a passive result)? Your input is appreciated, and if you have any tips for how to implement the latter I'm open to them (I can think of a few ways to possibly implement it)
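    (For illustration of the first option only: a Nagios plugin communicates through a line of output plus an exit code, so "alert for X polling periods" reduces to counting how many of the most recent samples breach the threshold before exiting non-OK. A minimal C# sketch of that idea, with the RRD read and the threshold both stubbed out as assumptions:)
        using System;
        using System.Linq;

        class CheckStorageTrend
        {
            // Nagios exit codes: 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN.
            static int Main(string[] args)
            {
                // The stub below stands in for the real RRD query; values are per-sample percentage deltas.
                double[] samples = ReadLastSamples(count: 3);

                // Flag only when every one of the last N samples shows a sudden jump,
                // so a single noisy polling period does not page anyone.
                int breaches = samples.Count(delta => Math.Abs(delta) > 25.0 /* % change, assumed threshold */);

                if (breaches == samples.Length)
                {
                    Console.WriteLine("CRITICAL - sudden utilization change sustained over {0} samples", breaches);
                    return 2;
                }
                if (breaches > 0)
                {
                    Console.WriteLine("WARNING - {0} of {1} recent samples show a sudden change", breaches, samples.Length);
                    return 1;
                }
                Console.WriteLine("OK - utilization trend within expected bounds");
                return 0;
            }

            // Hypothetical helper standing in for reading the last N entries of the cacti RRD.
            static double[] ReadLastSamples(int count)
            {
                return new double[count]; // placeholder data
            }
        }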

    Read the article

  • ODI 11g – Faster Files

    - by David Allan
    Deep in the trenches of ODI development I raised my head above the parapet to read a few odds and ends and then think: why don’t they know this? Such as this article here – in the past customers (see forum) were told to use a staging route, which has a big overhead for large files. This KM is an example of the great extensibility capabilities of ODI. It's quite simple, just a new KM that improves the out-of-the-box experience (just build the mapping and the appropriate KM is used) and improves out-of-the-box performance for file-to-file data movement. This improved handling of File to File data integration cases (from the 11.1.1.5.2 companion CD and on) dramatically speeds up file integration. In the past I had seen some consultants write Perl versions of the file to file integration case; now Oracle ships this KM to fill the gap. You can find the documentation for the IKM here. The KM uses pure Java to perform the integration, using java.io classes to read and write the file in a pipe – it uses Java threading in order to super-charge the file processing, and can process several source files at once when the datastore's resource name contains a wildcard. This is a big step for regular file processing on the way to super-charging big data files using Hadoop – the KM works with the lightweight agent and regular filesystems. So in my design below, transforming a bunch of files, by default the IKM File to File (Java) knowledge module was assigned. I pointed the KM at my JDK (since the KM generates and compiles Java), and I also increased the thread count to 2, to take advantage of my 2 processors. For my illustration I transformed (can also filter if desired) and moved about 1.3Gb with 2 threads in 140 seconds (with a single thread it took 220 seconds) - by no means was this on any super computer, by the way. The great thing here is that it worked well out of the box from design to execution without any funky configuration, plus, and it's a big plus, it was much faster than before. So if you are doing any file to file transformations, check it out!

    Read the article

  • Unmounted root partition

    - by Jack
    My server running Debian lenny recently had a power cut and it's come back up with the root partition in read-only mode. I tried to remount the filesystem in read-write mode with
        mount -n -o remount,rw /
    which then gave the output
        mount: block device /dev/hda1 is write-protected, mounting read-only
    But now the root filesystem isn't mounted at all, so I can't run anything to mount the partition again, or any other command for that matter, such as shutdown, because /bin/ isn't there. Is there anything I can do remotely?

    Read the article

  • Using PreApplicationStartMethod for ASP.NET 4.0 Application to Initialize assemblies

    - by ChrisD
    Sometimes your ASP.NET application needs to hook up some code before the application is even started. Assemblies support a custom attribute called PreApplicationStartMethod which can be applied to any assembly that is loaded by your ASP.NET application, and the ASP.NET engine will call the method you specify in it before actually running any code defined in the application. Let's discuss how to use it, step by step:
    1. Add an assembly to the application and add this custom attribute to its AssemblyInfo.cs. Remember, the method you specify for initialization should be a public static void method without any arguments. Let's define a method InitializeApp. You need to write:
        [assembly: PreApplicationStartMethod(typeof(MyInitializer.InitializeType), "InitializeApp")]
    2. After you add this to the assembly, you need to add some code inside the InitializeType.InitializeApp method within that assembly:
        public static class InitializeType
        {
            public static void InitializeApp()
            {
                // Initialize application
            }
        }
    3. You must reference this class library so that when the application starts and ASP.NET starts loading the dependent assemblies, it will call the method InitializeApp automatically.
    Warning: even though you can use this attribute easily, you should be aware that you can define this kind of method in every assembly that you reference, but there is no guarantee of the order in which the methods will be called. Hence it is recommended to keep this method isolated and free of side effects on other dependent assemblies. The method InitializeApp will be called well before the Application_Start event, and even before App_Code is compiled. This attribute is mainly used to write code for registering assemblies or build providers. Read the documentation. I hope this post proves helpful.

    Read the article

  • How can I get started using TDD to code some simple functionality?

    - by Gabriel
    I basically have the gist of TDD. I'm sold that it's useful and I've got a reasonable command of the MSTEST framework. However, to date I have not been able to graduate to using it as a primary development method. Mostly, I use it as a surrogate for writing console apps as test drivers (my traditional approach). The most useful thing about it for me is the way it absorbs the role of regression testing. I have not built anything yet that specifically isolates various testable behaviors, which is another big part of the picture, I know. So this question is to ask for pointers on the first test(s) I might write for the following development task: I want to produce code that encapsulates task execution in the fashion of producer/consumer. I stopped and decided to write this question after I wrote this code (wondering if I could actually use TDD for real this time). Code:
        interface ITask
        {
            Guid TaskId { get; }
            bool IsComplete { get; }
            bool IsFailed { get; }
            bool IsRunning { get; }
        }

        interface ITaskContainer
        {
            Guid AddTask(ICommand action);
        }

        interface ICommand
        {
            string CommandName { get; }
            Dictionary<string, object> Parameters { get; }
            void Execute();
        }
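    (Purely as a point of reference, a first MSTest test could pin down the smallest observable behaviour of ITaskContainer before any implementation exists. TaskContainer and FakeCommand below are names invented for this sketch; the container class not compiling yet is exactly what the first TDD test is supposed to force you to fix.)
        using System;
        using System.Collections.Generic;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class TaskContainerTests
        {
            // Stub command so the first test has no dependency on any real command type.
            private class FakeCommand : ICommand
            {
                public string CommandName { get { return "fake"; } }
                public Dictionary<string, object> Parameters { get { return new Dictionary<string, object>(); } }
                public void Execute() { }
            }

            [TestMethod]
            public void AddTask_ReturnsAnIdThatIsNotEmpty()
            {
                // TaskContainer does not exist yet; writing this test first is what drives it into existence.
                ITaskContainer container = new TaskContainer();

                Guid taskId = container.AddTask(new FakeCommand());

                Assert.AreNotEqual(Guid.Empty, taskId);
            }
        }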

    Read the article

  • Refactoring and Open / Closed principle

    - by Giorgio
    I have recently been reading a web site about clean code development (I do not put a link here because it is not in English). One of the principles advertised by this site is the Open Closed Principle: each software component should be open for extension and closed for modification. E.g., when we have implemented and tested a class, we should only modify it to fix bugs or to add new functionality (e.g. new methods that do not influence the existing ones). The existing functionality and implementation should not be changed. I normally apply this principle by defining an interface I and a corresponding implementation class A. When class A has become stable (implemented and tested), I normally do not modify it too much (possibly, not at all), i.e. if new requirements arrive (e.g. performance, or a totally new implementation of the interface) that require big changes to the code, I write a new implementation B, and keep using A as long as B is not mature. When B is mature, all that is needed is to change how I is instantiated. If the new requirements suggest a change to the interface as well, I define a new interface I' and a new implementation A'. So I and A are frozen and remain the implementation for the production system as long as I' and A' are not stable enough to replace them. So, in view of these observations, I was a bit surprised that the web page then suggested the use of complex refactorings, "... because it is not possible to write code directly in its final form." Isn't there a contradiction / conflict between enforcing the Open / Closed Principle and suggesting the use of complex refactorings as a best practice? Or is the idea here that one can use complex refactorings during the development of a class A, but once that class has been tested successfully it should be frozen?

    Read the article

  • Missing Package: header, Problem with MergeList, The package lists or status file could not be parsed or opened

    - by Inbar Rose
    THIS IS NOT A DUPLICATE OF SIMILAR QUESTIONS (like this). I just had to write that first; there are tons of questions similar to this, and all of them have the same redirect to an answer that does not solve my problem, because I don't have the same problem, just the same symptom. I write tests for my company's application. One of these tests tries to upgrade the application from a previous version to a new version to make sure nothing breaks. When I am installing an old version of the application, some weird stuff starts to happen. Sometimes everything goes okay and nothing is wrong; other times when trying to install I get this message (company app name censored):
        E: Encountered a section with no Package: header
        E: Problem with MergeList /var/lib/apt/lists/XXX-amd64_Packages
        E: The package lists or status file could not be parsed or opened.
    Using the solutions provided in questions similar to this one (like this) does not help, and the problem keeps repeating itself once it happens the first time. This has led me to believe something is wrong on the apt server where the package is being created, but searching these errors yields no information on anything beyond the "fix" suggested in the question I linked; the only other source of information I could find also did not help (here). So I am asking for information: What is the actual problem? What causes the problem? What can fix the problem? I hope this question is in good format; if there is a problem, or missing information, I can move to chat.

    Read the article

  • Distinguishing between UI command & domain commands

    - by SonOfPirate
    I am building a WPF client application using the MVVM pattern that provides an interface on top of an existing set of business logic residing in a library which is shared with other applications. The business library followed a domain-driven architecture using CQRS to separate the read and write models (no event sourcing). The combination of technologies and patterns has brought up an interesting conundrum: The MVVM pattern uses the command pattern for handling user interaction with the view models. .NET provides an ICommand interface which is implemented by most MVVM frameworks, like MVVM Light's RelayCommand and Prism's DelegateCommand. For example, the view model would expose a number of command objects as properties that are bound to the UI and respond when the user performs actions like clicking buttons. Many implementations of CQRS use the command pattern to isolate and encapsulate individual behaviors. In my business library, we have implemented the write model as command / command-handler pairs. As such, when we want to do some work, such as create a new order, we 'issue' a command (CreateOrderCommand) which is routed to the command handler responsible for executing the command. This is great, clearly explained in many sources, and I am good with it. However, take this scenario: I have a ToolbarViewModel which exposes a CreateNewOrderCommand property. This ICommand object is bound to a button in the UI. When clicked, the UI command creates and issues a new CreateOrderCommand object to the domain, which is handled by the CreateOrderCommandHandler. This is difficult to explain to other developers and I am finding myself getting tongue-tied because everything is a command. I'm sure I'm not the first developer to have patterns overlap like this where the naming/terminology also overlaps. How have you approached distinguishing your commands used in the UI from those used in the domain? (Edit: I should mention that the business library is UI-agnostic, i.e. no UI technology-specific code exists, or will exist, in this library.)
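    (One way to make the split visible in code, sketched with invented names: the WPF-side object stays an ICommand whose only job is to turn a button click into a domain command and hand it to whatever routes commands to handlers. ICommandBus, CreateOrderCommand and DelegateCommand below are stand-ins, not the actual types from the business library or from any particular MVVM framework.)
        using System;
        using System.Windows.Input;

        // Stand-ins for the business library's write model (assumed names).
        public class CreateOrderCommand { public string CustomerId { get; set; } }
        public interface ICommandBus { void Send(object domainCommand); }

        // Minimal ICommand helper so the sketch is self-contained (most MVVM frameworks ship an equivalent).
        public class DelegateCommand : ICommand
        {
            private readonly Action _execute;
            public DelegateCommand(Action execute) { _execute = execute; }
            public bool CanExecute(object parameter) { return true; }
            public void Execute(object parameter) { _execute(); }
            public event EventHandler CanExecuteChanged { add { } remove { } }
        }

        public class ToolbarViewModel
        {
            private readonly ICommandBus _bus;

            // "UI command": bound to a button, lives entirely in the presentation layer.
            public ICommand CreateNewOrderCommand { get; private set; }

            public ToolbarViewModel(ICommandBus bus)
            {
                _bus = bus;
                // Its only responsibility is to issue the "domain command" to the write model.
                CreateNewOrderCommand = new DelegateCommand(
                    () => _bus.Send(new CreateOrderCommand { CustomerId = "example" }));
            }
        }
    In conversation this leaves two unambiguous phrases, "view-model command" (or UI command) for the ICommand wrapper and "domain command" for the message the bus routes, without either layer having to give up its pattern.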

    Read the article

  • Am I wrong to disagree with A Gentle Introduction to symfony's template best practices?

    - by AndrewKS
    I am currently learning symfony and going through the book A Gentle Introduction to symfony, and came across this section in "Chapter 4: The Basics of Page Creation" on creating templates (or views): "If you need to execute some PHP code in the template, you should avoid using the usual PHP syntax, as shown in Listing 4-4. Instead, write your templates using the PHP alternative syntax, as shown in Listing 4-5, to keep the code understandable for non-PHP programmers."
        Listing 4-4 - The Usual PHP Syntax, Good for Actions, But Bad for Templates
        <p>Hello, world!</p>
        <?php if ($test) { echo "<p>".time()."</p>"; } ?>
    (The ironic thing about this is that the echo statement would look even better if time were a variable declared in the controller, because then you could just embed the variable in the string instead of concatenating.)
        Listing 4-5 - The Alternative PHP Syntax, Good for Templates
        <p>Hello, world!</p>
        <?php if ($test): ?>
        <p><?php echo time(); ?></p>
        <?php endif; ?>
    I fail to see how Listing 4-5 makes the code "understandable for non-PHP programmers", and its readability is shaky at best. 4-4 looks much more readable to me. Are there any programmers using symfony who write their templates like those in 4-4 rather than 4-5? Are there reasons I should use one over the other? There is the very slim chance that somewhere down the road someone less technical could be editing the template, but how does 4-5 actually make it more understandable to them?

    Read the article

  • Disable spell checking in Internet Explorer 10 (Windows 8)

    - by Lumi
    I installed the release preview of Windows 8, which comes with Internet Explorer 10. IE10 has been endowed with a spell-checking feature, which is active while I write this text. As a matter of principle, I disable spell checking anywhere I go. I simply hate spell checking. I know better than the spell checker. (And when I don't, it doesn't matter.) This is all the more annoying as I write in several languages and the spell checker only does one language (German, in this case), and then even in the new and wrong spelling, not in the old and true one. How can I disable this?

    Read the article

  • CentOS - Configuring Puppet to play nice with SELinux

    - by Mike Purcell
    I am running into an issue every time I attempt to start the puppetmasterd service, for which I receive the following error message:
        root@service1 ~ # -> /etc/init.d/puppetmaster start
        Starting puppetmaster: Could not prepare for execution: Got 1 failure(s) while initializing:
        change from absent to directory failed: Could not set 'directory on ensure: Permission denied - /etc/puppet/ssl [FAILED]
    Apparently there was a known issue with this scenario as outlined in this bug report, however in the bug report it states the issue has been resolved in selinux-policy-3.9.16-29.fc15, but the latest CentOS default upstream version is 3.7.19-155.el6_3.4. So I am trying to figure out the best solution. I can either create a local security policy to allow puppetmasterd the access it needs, or keep researching and install a newer version of selinux-policy outside of the default upstream channel. Anyone have any recommendations? Please don't recommend disabling SELinux...
    ----- Update -----
    Here is the puppet.conf:
        [main]
        # The Puppet log directory.
        # The default value is '$vardir/log'.
        logdir = /var/log/puppet
        # Where Puppet PID files are kept.
        # The default value is '$vardir/run'.
        rundir = /var/run/puppet
        # Where SSL certificates are kept.
        # The default value is '$confdir/ssl'.
        ssldir = $vardir/ssl
        [master]
        certname=puppetmaster.ownij.lan
        dns_alt_names=puppetmaster.ownij.lan
        [agent]
        # The file in which puppetd stores a list of the classes
        # associated with the retrieved configuratiion. Can be loaded in
        # the separate ``puppet`` executable using the ``--loadclasses``
        # option.
        # The default value is '$confdir/classes.txt'.
        classfile = $vardir/classes.txt
        # Where puppetd caches the local configuration. An
        # extension indicating the cache format is added automatically.
        # The default value is '$confdir/localconfig'.
        localconfig = $vardir/localconfig
        server=puppetmaster.ownij.lan
    And here are the denials per the audit log:
        type=AVC msg=audit(1349751364.985:666): avc: denied { search } for pid=15093 comm="puppetmasterd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir
        type=SYSCALL msg=audit(1349751364.985:666): arch=c000003e syscall=4 success=no exit=-13 a0=1391420 a1=7fffef09ed10 a2=7fffef09ed10 a3=120c500 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null)
        type=AVC msg=audit(1349751365.302:667): avc: denied { search } for pid=15093 comm="puppetmasterd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir
        type=SYSCALL msg=audit(1349751365.302:667): arch=c000003e syscall=4 success=no exit=-13 a0=1d18530 a1=7fffef0d04d0 a2=7fffef0d04d0 a3=8 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null)
        type=AVC msg=audit(1349751365.465:668): avc: denied { search } for pid=15093 comm="puppetmasterd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir
        type=SYSCALL msg=audit(1349751365.465:668): arch=c000003e syscall=4 success=no exit=-13 a0=1af3930 a1=7fffef0c5c70 a2=7fffef0c5c70 a3=8 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null)
        type=AVC msg=audit(1349751365.467:669): avc: denied { search } for pid=15093 comm="puppetmasterd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir
        type=SYSCALL msg=audit(1349751365.467:669): arch=c000003e syscall=4 success=no exit=-13 a0=1b17aa0 a1=7fffef0c5c70 a2=7fffef0c5c70 a3=8 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null)
        type=AVC msg=audit(1349751366.401:670): avc: denied { write } for pid=15093 comm="puppetmasterd" name="puppet" dev=dm-0 ino=132035 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:puppet_etc_t:s0 tclass=dir
        type=SYSCALL msg=audit(1349751366.401:670): arch=c000003e syscall=83 success=no exit=-13 a0=2d7a400 a1=1f9 a2=2d7a40f a3=7fffef0a6df0 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null)
    And the audit log if I pass through audit2allow:
        root@service1 ~ # -> fgrep puppetmasterd /var/log/audit/audit.log | audit2allow -m puppetmasterd
        module puppetmasterd 1.0;
        require {
            type home_root_t;
            type puppetmaster_t;
            type puppet_etc_t;
            type puppet_var_run_t;
            type httpd_sys_content_t;
            class lnk_file { relabelfrom relabelto };
            class file { relabelfrom read getattr open };
            class dir { write read search getattr setattr };
        }
        #============= puppetmaster_t ==============
        allow puppetmaster_t home_root_t:dir { search getattr };
        allow puppetmaster_t httpd_sys_content_t:dir read;
        allow puppetmaster_t httpd_sys_content_t:file { read getattr open };
        #!!!! The source type 'puppetmaster_t' can write to a 'dir' of the following types:
        # puppet_log_t, puppet_var_lib_t, puppet_var_run_t, puppetmaster_tmp_t
        allow puppetmaster_t puppet_etc_t:dir { write setattr };
        allow puppetmaster_t puppet_etc_t:lnk_file { relabelfrom relabelto };
        allow puppetmaster_t puppet_var_run_t:file relabelfrom;
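    (If the local-policy route wins out, the usual workflow is short; sketched here on the assumption that the policycoreutils Python tools are installed and that the generated rules get reviewed before loading:)
        # Generate a named local module from the logged denials (-M writes puppetmasterd.te and puppetmasterd.pp)
        fgrep puppetmasterd /var/log/audit/audit.log | audit2allow -M puppetmasterd
        # Review puppetmasterd.te, then load the compiled module
        semodule -i puppetmasterd.pp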

    Read the article

  • Unable to delete all partitions on flash drive using Windows 7 OS??

    - by irrational John
    Recently I purchased an ADATA C802 8GB flash drive. Since the drive was new I decided to run some of the HD Tune Pro (v4.50) performance tests on it, mostly just for the heck of it. To avoid accidentally destroying data, HD Tune refuses to write to a drive unless there are no partitions on the drive. If you do attempt to write to a drive with partitions, it posts the message "Writing is disabled. To enable writing please remove all partitions." As you would expect, the ADATA came formatted with a single primary FAT32 partition in the Master Boot Record. But a number of unexpected things happened when I attempted to delete that partition. The first thing I tried was to use the Windows 7 (64-bit) Disk Management tool (diskmgmt.msc) to delete the partition. It would not let me. The context menu choice to delete that volume was not available. Next I opened up a command prompt window with Admin authority and ran diskpart. Diskpart deleted the volume for me. However, when I attempted to run an HD Tune write test on the drive I still got the "Writing is disabled" message. Huh??? So I fired up a utility I have which allows viewing drives at the sector level and verified that the partition table in the Master Boot Record was empty. No partitions. Yet HD Tune still thought there were partitions on the drive? So why was I still getting the "Writing is disabled" message from HD Tune Pro? And why wouldn't the Windows 7 Disk Management tool let me change the partitions on this drive? After doing the above, I plugged the ADATA into my MacBook. I was then able to format it as either a GPT or MBR partitioned drive with no problems. I am not looking for suggestions on how to format this drive. I can do that. What I do not understand, and was hoping I might get insight into, is why this drive behaves so strangely under Windows 7. And BTW, what's up with HD Tune Pro? BTW, if I plug the drive I formatted on my MacBook back into my Windows 7 64-bit system I still run into roadblocks with the Disk Management tool. For example, I cannot delete all the GPT partitions on the ADATA so I can convert it into an MBR drive. I followed Microsoft's instructions, but the instructions just do not work with this ADATA flash drive. Anyone know what's up with this? It makes no sense to me. Has something changed in Windows 7 (Vista)??

    Read the article

  • Prepared statement alternatives for this middle-man program?

    - by user2813274
    I have a program that uses a prepared statement to connect and write to a database, working nicely, and I now need to create a middle-man program to insert between this program and the database. This middle-man program will actually write to multiple databases and handle any errors and connection issues. I would like advice as to how to replicate the prepared statements so as to create minimal impact on the existing program, however I am not sure where to start. I have thought about creating a "SQL statement class" that mimics the prepared statement, only that seems silly. The existing program is in Java, although it's going to be networked anyway, so I would be open to writing it in just about anything that would make sense. The databases are currently MySQL, although I would like to be open to changing the database type in the future. My main question is: what should the interface for this program look like, and does doing this even make sense? A distributed DB would be the ideal solution, but they seem overly complex and expensive for my needs. I am hoping to replicate the main functionality of a distributed DB via this middle-man. I am not too familiar with SQL-based servers distributing data (or databases in general...) - perhaps I am fighting an uphill battle by trying to solve it via programming, but I would like to make an attempt at least.
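    (For what the "SQL statement class" idea might look like in practice, sketched in C# purely for illustration with invented names; the same shape carries over to Java. The client fills in parameterized SQL plus ordered values, and the middle-man replays the request as a prepared statement against each target database.)
        using System.Collections.Generic;

        // Hypothetical wire-level description of one prepared-statement execution.
        public class StatementRequest
        {
            // Parameterized SQL exactly as the client would have prepared it,
            // e.g. "INSERT INTO log (source, message) VALUES (?, ?)".
            public string Sql { get; set; }

            // Parameter values in placeholder order; the middle-man binds them
            // on each target connection (e.g. with setObject in JDBC).
            public List<object> Parameters { get; set; }

            public StatementRequest(string sql)
            {
                Sql = sql;
                Parameters = new List<object>();
            }
        }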

    Read the article

  • Configure PostgreSQL pg_hba.conf to restrict role access

    - by Mathias
    Hello Postgres experts. I am completely new to the game but need the following: I create a new role with login, let's say User1. I then create a database 'User1Database' and set User1 as the owner. User1 has no rights to do anything except for access. Now when I connect using User1 it somehow has access to all databases. I then learned I need to write something in here. User1 should have global access to User1Database and absolutely no access to anything else. What lines do I need to add to my pg_hba file? Currently it looks like this:
        # IPv4 local connections:
        host    all    all    127.0.0.1/32    md5
        # IPv6 local connections:
        host    all    all    ::1/128         md5
        host    all    all    0.0.0.0/0       md5
    Hope someone can write me the exact lines and explain them to me.
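    (A sketch of the kind of entries involved, not a drop-in answer: pg_hba.conf is matched top to bottom, so the broad "host all all 0.0.0.0/0 md5" line above is what lets User1 reach every database regardless of anything listed after it. Restricting the role means replacing that catch-all with per-database lines, roughly like the following; note that pg_hba.conf only controls who may connect from where - privileges inside other databases are still governed by GRANT/REVOKE.)
        # TYPE  DATABASE        USER    ADDRESS         METHOD
        # User1 may reach only its own database:
        host    User1Database   User1   0.0.0.0/0       md5
        # keep local access for the other roles:
        host    all             all     127.0.0.1/32    md5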

    Read the article

  • perfmon reporting higher IOPs than possible?

    - by BlueToast
    We created a monitoring report for IOPS on performance counters using Disk reads/sec and Disk writes/sec on four servers (physical boxes, no virtualization) that have 4x 15k 146GB SAS drives in RAID10 per server, set to check and record data every 1 second, and logged for 24 hours before stopping the reports. These are the results we got:
        Server1: Maximum disk reads/sec: 4249.437, Maximum disk writes/sec: 4178.946
        Server2: Maximum disk reads/sec: 2550.140, Maximum disk writes/sec: 5177.821
        Server3: Maximum disk reads/sec: 1903.300, Maximum disk writes/sec: 5299.036
        Server4: Maximum disk reads/sec: 8453.572, Maximum disk writes/sec: 11584.653
    The average disk reads and writes per second were generally low. I.e. for one particular server it averaged 33 writes/sec, but when monitoring in real time it would often spike up to several hundred and sometimes into the thousands. Could someone explain to me why these numbers are significantly higher than theoretical calculations assuming each drive can do 180 IOPS? Additional details (RAID card): HP Smart Array P410i, total cache size of 1GB, write cache is disabled, array accelerator cache ratio is 25% read and 75% write.
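    (For reference, the arithmetic behind the "theoretical" ceiling the question assumes: 4 drives x 180 IOPS = 720 random reads/sec per RAID10 array, and since each host write lands on two mirrored drives, about (4 x 180) / 2 = 360 random writes/sec. Both figures assume purely random I/O that actually reaches the spindles.)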

    Read the article

  • Misconceptions about purely functional languages?

    - by Giorgio
    I often encounter the following statements / arguments: (1) Pure functional programming languages do not allow side effects (and are therefore of little use in practice because any useful program does have side effects, e.g. when it interacts with the external world). (2) Pure functional programming languages do not allow you to write a program that maintains state (which makes programming very awkward because in many applications you do need state). I am not an expert in functional languages but here is what I have understood about these topics so far. Regarding point 1, you can interact with the environment in purely functional languages, but you have to explicitly mark the code (functions) that introduces such effects (e.g. in Haskell by means of monadic types). Also, AFAIK computing by side effects (destructively updating data) should also be possible (using monadic types?) but is not the preferred way of working. Regarding point 2, AFAIK you can represent state by threading values through several computation steps (in Haskell, again, using monadic types), but I have no practical experience doing this and my understanding is rather vague. So, are the two statements above correct in any sense or are they just misconceptions about purely functional languages? If they are misconceptions, how did they come about? Could you write a (possibly small) code snippet illustrating the Haskell idiomatic way to (1) implement side effects and (2) implement a computation with state?

    Read the article

  • Is there a better way to consume an ASP.NET Web API call in an MVC controller?

    - by davidisawesome
    In a new project I am creating for my work I am creating a fairly large ASP.NET Web API. The api will be in a separate visual studio solution that also contains all of my business logic and database interactions, Model classes as well. In the test application I am creating (which is asp.net mvc4), I want to be able to hit an api url I defined from the control and cast the return JSON to a Model class. The reason behind this is that I want to take advantage of strongly typing my views to a Model. This is all still in a proof of concept stage, so I have not done any performance testing on it, but I am curious if what I am doing is a good practice, or if I am crazy for even going down this route. Here is the code on the client controller: public class HomeController : Controller { protected string dashboardUrlBase = "http://localhost/webapi/api/StudentDashboard/"; public ActionResult Index() //This view is strongly typed against User { //testing against Joe Bob string adSAMName = "jBob"; WebClient client = new WebClient(); string url = dashboardUrlBase + "GetUserRecord?userName=" + adSAMName; //'User' is a Model class that I have defined. User result = JsonConvert.DeserializeObject<User>(client.DownloadString(url)); return View(result); } . . . } If I choose to go this route another thing to note is I am loading several partial views in this page (as I will also do in subsequent pages). The partial views are loaded via an $.ajax call that hits this controller and does basically the same thing as the code above: Instantiate a new WebClient Define the Url to hit Deserialize the result and cast it to a Model Class. So it is possible (and likely) I could be performing the same actions 4-5 times for a single page. Is there a better method to do this that will: Let me keep strongly typed views. Do my work on the server rather than on the client (this is just a preference since I can write C# faster than I can write javascript).

    Read the article

  • C# Simple Twitter Update

    - by mroberts
    For what it's worth, a simple Twitter update. (Note: the original declared Execute as override without a base type, which would not compile, so the modifier is dropped here.)
        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        namespace Server.Actions
        {
            public class TwitterUpdate
            {
                public string Body { get; set; }
                public string Login { get; set; }
                public string Password { get; set; }

                public void Execute()
                {
                    try
                    {
                        //encode user name and password
                        string creds = Convert.ToBase64String(Encoding.ASCII.GetBytes(string.Format("{0}:{1}", this.Login, this.Password)));

                        //encode tweet
                        byte[] tweet = Encoding.ASCII.GetBytes("status=" + this.Body);

                        //setup request
                        HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://twitter.com/statuses/update.xml");
                        request.Method = "POST";
                        request.ServicePoint.Expect100Continue = false;
                        request.Headers.Add("Authorization", string.Format("Basic {0}", creds));
                        request.ContentType = "application/x-www-form-urlencoded";
                        request.ContentLength = tweet.Length;

                        //write to stream
                        Stream reqStream = request.GetRequestStream();
                        reqStream.Write(tweet, 0, tweet.Length);
                        reqStream.Close();

                        //check response
                        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
                        //...
                    }
                    catch (Exception e)
                    {
                        //...
                    }
                }
            }
        }

    Read the article

  • Can Clojure's thread-based agents handle c10k performance?

    - by elliot42
    I'm writing a c10k-style service and am trying to evaluate Clojure's performance. Can Clojure agents handle this scale of concurrency with its thread-based agents? Other high performance systems seem to be moving towards async-IO/events/greenlets, albeit at a seemingly higher complexity cost. Suppose there are 10,000 clients connected, sending messages that should be appended to 1,000 local files--the Clojure service is trying to write to as many files in parallel as it can, while not letting any two separate requests mangle the same single file by writing at the same time. Clojure agents are extremely elegant conceptually--they would allow separate files to be written independently and asynchronously, while serializing (in the database sense) multiple requests to write to the same file. My understanding is that agents work by starting a thread for each operation (assume we are IO-bound and using send-off)--so in this case is it correct that it would start 1,000+ threads? Can current-day systems handle this number of threads efficiently? Most of them should be IO-bound and sleeping most of the time, but I presume there would still be a context-switching penalty that is theoretically higher than async-IO/event-based systems (e.g. Erlang, Go, node.js). If the Clojure solution can handle the performance, it seems like the most elegant thing to code. However if it can't handle the performance then something like Erlang or Go's lightweight processes might be preferable, since they are designed to have tens of thousands of them spawned at once, and are only moderately more complex to implement. Has anyone approached this problem in Clojure or compared to these other platforms? (Thanks for your thoughts!)

    Read the article
