Search Results

Search found 15952 results on 639 pages for 'assembly language'.


  • How to change the language of driver interface for Canon Pixma printers?

    - by Sammy
    Is there a way to change the language of the driver interface for Canon Pixma printers? Which language is used seems to be determined by the language of the OS or the Windows localization settings. I really don't want that; I want to be able to set the language manually to my own liking, either during the driver installation or afterwards. I have found a workaround for the Pixma IP2770 where you edit the setup.ini file by replacing the language names and the DLL search paths with <SELECT> under the LANGUAGES section. So instead of...

      0000=<SELECT>
      0001=Arabic,RES\STRING\IJInstAR.ini,RES\DLL\IJInstAR.dll
      0804=Simplified Chinese,RES\STRING\IJInstCN.ini,RES\DLL\IJInstCN.dll
      0404=Traditional Chinese,RES\STRING\IJInstTW.ini,RES\DLL\IJInstTW.dll
      0005=Czech,RES\STRING\IJInstCZ.ini,RES\DLL\IJInstCZ.dll
      0006=Danish,RES\STRING\IJInstDK.ini,RES\DLL\IJInstDK.dll
      0007=German,RES\STRING\IJInstDE.ini,RES\DLL\IJInstDE.dll
      0008=Greek,RES\STRING\IJInstGR.ini,RES\DLL\IJInstGR.dll
      0009=English,RES\STRING\IJInstUS.ini,RES\DLL\IJInstUS.dll
      000A=Spanish,RES\STRING\IJInstES.ini,RES\DLL\IJInstES.dll
      000B=Finnish,RES\STRING\IJInstFI.ini,RES\DLL\IJInstFI.dll
      000C=French,RES\STRING\IJInstFR.ini,RES\DLL\IJInstFR.dll
      000E=Hungarian,RES\STRING\IJInstHU.ini,RES\DLL\IJInstHU.dll
      0010=Italian,RES\STRING\IJInstIT.ini,RES\DLL\IJInstIT.dll
      0011=Japanese,RES\STRING\IJInstJP.ini,RES\DLL\IJInstJP.dll
      0012=Korean,RES\STRING\IJInstKR.ini,RES\DLL\IJInstKR.dll
      0013=Dutch,RES\STRING\IJInstNL.ini,RES\DLL\IJInstNL.dll
      0014=Norwegian,RES\STRING\IJInstNO.ini,RES\DLL\IJInstNO.dll
      0015=Polish,RES\STRING\IJInstPL.ini,RES\DLL\IJInstPL.dll
      0016=Portuguese,RES\STRING\IJInstPT.ini,RES\DLL\IJInstPT.dll
      0019=Russian,RES\STRING\IJInstRU.ini,RES\DLL\IJInstRU.dll
      001D=Swedish,RES\STRING\IJInstSE.ini,RES\DLL\IJInstSE.dll
      001E=Thai,RES\STRING\IJInstTH.ini,RES\DLL\IJInstTH.dll
      001F=Turkish,RES\STRING\IJInstTR.ini,RES\DLL\IJInstTR.dll
      0021=Indonesian,RES\STRING\IJInstID.ini,RES\DLL\IJInstID.dll

    You get...

      0000=<SELECT>
      0001=<SELECT>
      0804=<SELECT>
      0404=<SELECT>
      0005=<SELECT>
      0006=<SELECT>
      0007=<SELECT>
      0008=<SELECT>
      0009=English,RES\STRING\IJInstUS.ini,RES\DLL\IJInstUS.dll
      000A=<SELECT>
      000B=<SELECT>
      000C=<SELECT>
      000E=<SELECT>
      0010=<SELECT>
      0011=<SELECT>
      0012=<SELECT>
      0013=<SELECT>
      0014=<SELECT>
      0015=<SELECT>
      0016=<SELECT>
      0019=<SELECT>
      001D=<SELECT>
      001E=<SELECT>
      001F=<SELECT>
      0021=<SELECT>

    ...in case English is the preferred language. It's a way to force the installation program to install only the English language support. The IP2770 is a model for the Asian market, so if you want to check this out you need to go to the Canon India download page (for instance) to get the driver. Unfortunately this method is not possible with my IP4000. There is no driver even available for it to download for Windows Vista. But is there really no way of changing the language of the UI in any normal way, you know... without having to hack it? Besides, the driver for my printer comes with Windows Vista, so I don't even have to install any drivers, and since that installation never happens I never get the chance to set the language. Any ideas?...


  • InternalsVisibleTo attribute and security vulnerability

    - by Sergey Litvinov
    I found one issue with InternalsVisibleTo attribute usage. The idea of InternalsVisibleTo attribute to allow some other assemblies to use internal classes\methods of this assembly. To make it work you need sign your assemblies. So, if other assemblies isn't specified in main assembly and if they have incorrect public key, then they can't use Internal members. But the issue in Reflection Emit type generation. For example, we have CorpLibrary1 assembly and it has such class: public class TestApi { internal virtual void DoSomething() { Console.WriteLine("Base DoSomething"); } public void DoApiTest() { // some internal logic // ... // call internal method DoSomething(); } } This assembly is marked with such attribute to allow another CorpLibrary2 to make inheritor for that TestAPI and override behaviour of DoSomething method. [assembly: InternalsVisibleTo("CorpLibrary2, PublicKey=0024000004800000940000000602000000240000525341310004000001000100434D9C5E1F9055BF7970B0C106AAA447271ECE0F8FC56F6AF3A906353F0B848A8346DC13C42A6530B4ED2E6CB8A1E56278E664E61C0D633A6F58643A7B8448CB0B15E31218FB8FE17F63906D3BF7E20B9D1A9F7B1C8CD11877C0AF079D454C21F24D5A85A8765395E5CC5252F0BE85CFEB65896EC69FCC75201E09795AAA07D0")] The issue is that I'm able to override this internal DoSomething method and break class logic. My steps to do it: Generate new assembly in runtime via AssemblyBuilder Get AssemblyName from CorpLibrary1 and copy PublikKey to new assembly Generate new assembly that will inherit TestApi class As PublicKey and name of generated assembly is the same as in InternalsVisibleTo, then we can generate new DoSomething method that will override internal method in TestAPI assembly Then we have another assembly that isn't related to this CorpLibrary1 and can't use internal members. We have such test code in it: class Program { static void Main(string[] args) { var builder = new FakeBuilder(InjectBadCode, "DoSomething", true); TestApi fakeType = builder.CreateFake(); fakeType.DoApiTest(); // it will display: // Inject bad code // Base DoSomething Console.ReadLine(); } public static void InjectBadCode() { Console.WriteLine("Inject bad code"); } } And this FakeBuilder class has such code: /// /// Builder that will generate inheritor for specified assembly and will overload specified internal virtual method /// /// Target type public class FakeBuilder { private readonly Action _callback; private readonly Type _targetType; private readonly string _targetMethodName; private readonly string _slotName; private readonly bool _callBaseMethod; public FakeBuilder(Action callback, string targetMethodName, bool callBaseMethod) { int randomId = new Random((int)DateTime.Now.Ticks).Next(); _slotName = string.Format("FakeSlot_{0}", randomId); _callback = callback; _targetType = typeof(TFakeType); _targetMethodName = targetMethodName; _callBaseMethod = callBaseMethod; } public TFakeType CreateFake() { // as CorpLibrary1 can't use code from unreferences assemblies, we need to store this Action somewhere. // And Thread is not bad place for that. 
It's not the best place as it won't work in multithread application, but it's just a sample LocalDataStoreSlot slot = Thread.AllocateNamedDataSlot(_slotName); Thread.SetData(slot, _callback); // then we generate new assembly with the same nameand public key as target assembly trusts by InternalsVisibleTo attribute var newTypeName = _targetType.Name + "Fake"; var targetAssembly = Assembly.GetAssembly(_targetType); AssemblyName an = new AssemblyName(); an.Name = GetFakeAssemblyName(targetAssembly); // copying public key to new generated assembly var assemblyName = targetAssembly.GetName(); an.SetPublicKey(assemblyName.GetPublicKey()); an.SetPublicKeyToken(assemblyName.GetPublicKeyToken()); AssemblyBuilder assemblyBuilder = Thread.GetDomain().DefineDynamicAssembly(an, AssemblyBuilderAccess.RunAndSave); ModuleBuilder moduleBuilder = assemblyBuilder.DefineDynamicModule(assemblyBuilder.GetName().Name, true); // create inheritor for specified type TypeBuilder typeBuilder = moduleBuilder.DefineType(newTypeName, TypeAttributes.Public | TypeAttributes.Class, _targetType); // LambdaExpression.CompileToMethod can be used only with static methods, so we need to create another method that will call our Inject method // we can do the same via ILGenerator, but expression trees are more easy to use MethodInfo methodInfo = CreateMethodInfo(moduleBuilder); MethodBuilder methodBuilder = typeBuilder.DefineMethod(_targetMethodName, MethodAttributes.Public | MethodAttributes.Virtual); ILGenerator ilGenerator = methodBuilder.GetILGenerator(); // call our static method that will call inject method ilGenerator.EmitCall(OpCodes.Call, methodInfo, null); // in case if we need, then we put call to base method if (_callBaseMethod) { var baseMethodInfo = _targetType.GetMethod(_targetMethodName, BindingFlags.NonPublic | BindingFlags.Instance); // place this to stack ilGenerator.Emit(OpCodes.Ldarg_0); // call the base method ilGenerator.EmitCall(OpCodes.Call, baseMethodInfo, new Type[0]); // return ilGenerator.Emit(OpCodes.Ret); } // generate type, create it and return to caller Type cheatType = typeBuilder.CreateType(); object type = Activator.CreateInstance(cheatType); return (TFakeType)type; } /// /// Get name of assembly from InternalsVisibleTo AssemblyName /// private static string GetFakeAssemblyName(Assembly assembly) { var internalsVisibleAttr = assembly.GetCustomAttributes(typeof(InternalsVisibleToAttribute), true).FirstOrDefault() as InternalsVisibleToAttribute; if (internalsVisibleAttr == null) { throw new InvalidOperationException("Assembly hasn't InternalVisibleTo attribute"); } var ind = internalsVisibleAttr.AssemblyName.IndexOf(","); var name = internalsVisibleAttr.AssemblyName.Substring(0, ind); return name; } /// /// Generate such code: /// ((Action)Thread.GetData(Thread.GetNamedDataSlot(_slotName))).Invoke(); /// private LambdaExpression MakeStaticExpressionMethod() { var allocateMethod = typeof(Thread).GetMethod("GetNamedDataSlot", BindingFlags.Static | BindingFlags.Public); var getDataMethod = typeof(Thread).GetMethod("GetData", BindingFlags.Static | BindingFlags.Public); var call = Expression.Call(allocateMethod, Expression.Constant(_slotName)); var getCall = Expression.Call(getDataMethod, call); var convCall = Expression.Convert(getCall, typeof(Action)); var invokExpr = Expression.Invoke(convCall); var lambda = Expression.Lambda(invokExpr); return lambda; } /// /// Generate static class with one static function that will execute Action from Thread NamedDataSlot /// private MethodInfo 
CreateMethodInfo(ModuleBuilder moduleBuilder) { var methodName = "_StaticTestMethod_" + _slotName; var className = "_StaticClass_" + _slotName; TypeBuilder typeBuilder = moduleBuilder.DefineType(className, TypeAttributes.Public | TypeAttributes.Class); MethodBuilder methodBuilder = typeBuilder.DefineMethod(methodName, MethodAttributes.Static | MethodAttributes.Public); LambdaExpression expression = MakeStaticExpressionMethod(); expression.CompileToMethod(methodBuilder); var type = typeBuilder.CreateType(); return type.GetMethod(methodName, BindingFlags.Static | BindingFlags.Public); } } Remarks about the sample: as we need to execute code from another assembly that CorpLibrary1 has no access to, we need to store the delegate somewhere; just for testing I stored it in a Thread named data slot. It won't work in multithreaded applications, but it's just a sample. I know that we can use Reflection to get private\internal members of any class, but with reflection alone we can't override them. This issue, however, allows anyone to override an internal class\method if that assembly has the InternalsVisibleTo attribute. I tested it on .NET 3.5\4 and it works on both of them. How is it possible to just copy the PublicKey, without the private key, and use it at runtime? The whole sample can be found here - https://github.com/sergey-litvinov/Tests_InternalsVisibleTo UPDATE1: The test code in the Program and FakeBuilder classes has no access to the key.sn file and that library isn't signed, so it has no public key at all. It just copies the key from CorpLibrary1 by using Reflection.Emit


  • How To Create Your Own x86 Operating System for Modern PC Computers

    - by mudge
    I'd like to create a new operating system for x86 PC computers. I'd like it to be 64-bit but possibly run as 32-bit as well. I have these kinds of questions: What kinds of things do you start working on first? Knowing where to start in writing your own operating system seems to me to be a tricky subject, so I am interested in your input. Generally how to go about making your own 32-bit/64-bit operating system, or good resources that mention useful information about going about writing your own operating system for x86 computers. I don't care how old sources are as long as they are still relevant and useful to what I am doing. I know that I will want it to have kernel drivers that access peripheral hardware directly. Where should I look for advice and documentation for programming and understanding the interface to peripheral hardware the operating system will communicate with? I will need to understand how the operating system will receive input and interact with keyboards, mice, computer monitors, hard drives, USB, etc. etc. This is probably the area I know least about. I have the Intel instruction set manuals and have been getting more familiar with assembly programming, so the CPU side of things is what I know the most about. At this point I'm thinking that I'd like to implement the Linux system calls within my operating system so that programs that run on Linux can run on my operating system. I want my operating system to use the ELF binary format. I wonder what obstacles I have to overcome to achieve this Linux compatibility. Are the main things implementing the system calls that Linux provides, and using the ELF format? What else? I am also interested in people's thoughts about why it might not be a good idea to make your own operating system, and why it is a good idea to make your own operating system. Thank you for any input.
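
    As a concrete first milestone, many tutorials (for example the OSDev wiki's "Bare Bones") have you boot via an existing Multiboot loader such as GRUB and jump into a tiny freestanding kernel. Below is a hedged sketch of roughly the smallest useful C entry point; the VGA text buffer address 0xB8000 is standard for BIOS-era text mode, but the build flags, stub, and function names are illustrative assumptions, not a recommendation from the question itself.

      /* Freestanding kernel entry: no libc, loaded by a Multiboot loader.
       * Assumed build: a cross compiler (e.g. i686-elf-gcc) with
       * -ffreestanding -nostdlib, plus a small assembly stub that sets up
       * a stack and calls kmain(). */
      #include <stdint.h>

      #define VGA_TEXT ((volatile uint16_t *)0xB8000)   /* 80x25 text-mode buffer */

      static void vga_print(const char *s)
      {
          volatile uint16_t *cell = VGA_TEXT;
          while (*s)
              *cell++ = (uint16_t)(0x0F00 | (uint8_t)*s++);  /* white on black */
      }

      void kmain(void)
      {
          vga_print("Hello from my kernel");
          for (;;) { }            /* never return to the loader */
      }

    Once something like this boots, the later questions (drivers, system calls, ELF loading) can be layered on top one subsystem at a time.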


  • .NET Reference "Copy Local" True / False Being Set Based on Contents of GAC

    - by D-Sect
    We had a very interesting problem with a Win Forms project. It's been resolved. We know what happened, but we want to understand why it happened. This may help other people out in the future who have a similar problem. The WinForms project failed on 2 of our client's PCs. The error was an obscure kernel.dll error. The project ran fine on 3 other PCs. We found that a .DLL (log4net.dll - a very popular open-source logging library) was missing from our release folder. It was previously in our release folder. Why was it missing in this latest release? It was missing because I must have installed a program on my Dev box that used log4net.dll and it was added to the Global Assembly Cache. When I checked the solution's references for log4net.dll, they were changed to "copy local=FALSE". They must have changed automatically because log4net.dll was present in my GAC. Here's where my question starts: Why did my reference for log4net.dll get changed from COPY LOCAL = TRUE to COPY LOCAL = FALSE? I suspect it's because it was added to my GAC by another program. How can we prevent this from happening again? As it stands now, if I install a piece of software that uses a common library and it adds it to my GAC, then my SLNs that reference that DLL will change from Copy Local TRUE to FALSE.


  • Which Computer Organization & Architecture book is good for me?

    - by claws
    I'm always interested in learning the inner workings of things. I started with C programming, then learnt operating systems (from Stallings), then linkers & loaders, and then assembly language. After reading these I now want to go into a little more depth: computer architecture. I feel that makes everything clear. As per the SO archives these are the two good books: Computer Architecture: A Quantitative Approach, 4th Edition and Computer Organization and Design, Fourth Edition, by David A. Patterson and John L. Hennessy. But I've browsed through the contents of these books and found that they don't exactly meet my needs. I want to learn more about caches, the Memory Management Unit, and the mapping between virtual memory & physical memory. I'm in no way interested in other ISAs like MIPS etc.; I'm an IA32 and X86-64 fan and I want to stick to them. I'm not a hardware developer, so I don't want details like circuit diagrams or how the L1, L2 & L3 caches are implemented. I want to know about parallel processing technologies like HyperThreading at the architecture level, but again I don't want to design them. I liked the table of contents of Computer Architecture: A Quantitative Approach, 4th Edition, but Quantitative Approach? Seriously?? I want to know the details of current technologies, and I don't want to spend 200 pages reading about outdated old technologies (I experienced this while learning ASM).


  • P6 Architecture - Register renaming aside, do the limited user registers result in more ops spent

    - by mrjoltcola
    I'm studying JIT design with regard to dynamic language VM implementation. I haven't done much assembly since the 8086/8088 days, just a little here or there, so be nice if I'm out of sorts. As I understand it, the x86 (IA-32) architecture still has the same basic limited register set today that it always did, while the internal register count has grown tremendously; but these internal registers are not generally available and are used with register renaming to achieve parallel pipelining of code that otherwise could not be parallelized. I understand this optimization pretty well, but my feeling is that while these optimizations help overall throughput and parallel algorithms, the limited register set we are still stuck with results in more register-spilling overhead, such that if x86 had double or quadruple the registers available to us, there might be significantly fewer push/pop opcodes in a typical instruction stream. Or are there other processor optimizations that also optimize this away that I am unaware of? Basically, if I have a unit of code that has 4 registers to work with for integer work, but my unit has a dozen variables, I've got potentially a push/pop for every 2 or so instructions. Any references to studies, or better yet, personal experiences?
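
    For a feel of the effect being described, here is a small hedged C sketch (the function and the arithmetic are made up purely for illustration): with more simultaneously live values than x86-32's handful of general-purpose registers, the compiler has to keep some of them in stack slots, and the extra loads/stores show up in the disassembly as exactly the kind of spill traffic the question asks about.

      /* Hypothetical example: more live values than the x86-32 general-purpose
       * registers (EAX, EBX, ECX, EDX, ESI, EDI, EBP). Enough of the locals
       * are live at once that a 32-bit compiler typically spills several of
       * them to the stack frame and reloads them later. */
      unsigned spill_demo(const unsigned *p)
      {
          unsigned a = p[0],  b = p[1],  c = p[2],  d = p[3];
          unsigned e = p[4],  f = p[5],  g = p[6],  h = p[7];
          unsigned i = p[8],  j = p[9],  k = p[10], l = p[11];

          /* keep many values live across the whole computation */
          a += b * c;  d += e * f;  g += h * i;  j += k * l;
          return (a ^ d) + (g ^ j) + b + e + h + k;
      }

    Comparing the 32-bit and 64-bit output of the same compiler on a function like this is one quick way to see how much of the push/pop and mov-to-stack traffic disappears when eight more architectural registers become available.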


  • SAL and SAR by 0 errors

    - by Roy McAvoy
    I have discovered a bug in some assembly code I have been working with but can't figure out how to fix it. When shifting left by 0 the result ends up being 0 instead of just the number. The same applies when shifting to the right. Any and all help is much appreciated.

      function sal(n,k:integer):integer;
      begin
        asm
          cld
          mov cx, k
        @1:
          sal n, 1
          loop @1
        end;
        sal:= n;
      end;

      function sar(n,k:integer):integer;
      begin
        asm
          cld
          mov cx, k
        @1:
          sar n, 1
          loop @1
        end;
        sar:=n;
      end;

    I have tried to change them in the following way and it still does not work properly.

      function sal(n,k:integer):integer;
      begin
        asm
          cld
          mov cx, k
          jcxz @done
        @1:
          sal n, 1
          loop @1
        @done:
        end;
        sal:= n;
      end;

      function sar(n,k:integer):integer;
      begin
        asm
          cld
          mov cx, k
          jcxz @done
        @1:
          sar n, 1
          loop @1
        @done:
        end;
        sar:=n;
      end;
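
    A likely culprit in the first version: the loop instruction decrements CX before testing it, so with k = 0 the count wraps to 0FFFFh and the operand is shifted 65536 times, which leaves 0; jcxz is the usual guard for exactly that case, so if the second version still fails something else is going on there. As a reference to test the asm against, here is a hedged C sketch of what shift-left/shift-right by k should return for 16-bit operands, including k = 0 (a plain portable reimplementation, not the original Pascal code):

      #include <stdint.h>

      /* Shift left: shifting by 0 must return n unchanged.
       * Assumption: the caller keeps k in the range 0..15. */
      int16_t sal16(int16_t n, unsigned k)
      {
          return (int16_t)((uint16_t)n << k);
      }

      /* Arithmetic shift right: the sign bit is replicated into the vacated
       * positions, so negative values stay negative. Done with unsigned
       * arithmetic because >> on a negative signed value is
       * implementation-defined in C. */
      int16_t sar16(int16_t n, unsigned k)
      {
          uint16_t u = (uint16_t)n;
          uint16_t shifted = (uint16_t)(u >> k);
          if (n < 0 && k > 0)
              shifted |= (uint16_t)(0xFFFFu << (16 - k));
          return (int16_t)shifted;
      }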


  • Read hexadecimal numbers from a file and change their representation style

    - by user576844
    I want to write a program changing the notation of all hexadecimal numbers found in an assembly source file from traditional (h) to C-style (0x). I have started the coding part but am not sure how I can detect the hexadecimal numbers and eventually change the style and save it back to the file... I have started writing the program:

      ## Mips program -
              .data
      fin:    .ascii ""                # filename for input
      msg0:   .asciiz "aaaa"
      msg1:   .asciiz "Please enter the input file name:"
      buffer: .asciiz ""
              .text
      #-----------------------
              li $v0, 4
              la $a0, msg1
              syscall
              li $v0, 8
              la $a0, fin
              li $a1, 21
              syscall
              jal fileRead             # read from file
              move $s1, $v0            # $t0 = total number of bytes
              li $t0, 0                # Loop counter
      loop:   bge $t0, $s1, end        # if end of file reached OR if there is an error in the file
              lb $t5, buffer($t0)      # load next byte from file
              jal checkhexa            # check for hexadecimal numbers
              addi $t0, $t0, 1         # increment loop counter
              j loop
      end:    jal output
              jal fileClose
              li $v0, 10
              syscall

      fileRead:
              # Open file for reading
              li $v0, 13               # system call for open file
              la $a0, fin              # input file name
              li $a1, 0                # flag for reading
              li $a2, 0                # mode is ignored
              syscall                  # open a file
              move $s0, $v0            # save the file descriptor
              # reading from file just opened
              li $v0, 14               # system call for reading from file
              move $a0, $s0            # file descriptor
              la $a1, buffer           # address of buffer from which to read
              li $a2, 100000           # hardcoded buffer length
              syscall                  # read from file
              jr $ra

    Any help would be appreciated.
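
    Since the detection/rewriting logic is the sticking point, here is a hedged sketch of just that part in C rather than MIPS (the scanning idea transfers directly): a token is hex in h-notation if it ends in h/H, everything before that is a hex digit, and, as most assemblers require, it starts with a decimal digit. The tokenizer here is deliberately crude and flattens the original spacing and commas; a real tool would rewrite the buffer in place.

      #include <ctype.h>
      #include <stdio.h>
      #include <string.h>

      /* Rewrite one token: "0FFh" -> "0xFF". Returns 1 if it was rewritten. */
      static int rewrite_hex_token(const char *tok, char *out, size_t outsz)
      {
          size_t len = strlen(tok);
          if (len < 2 || (tok[len - 1] != 'h' && tok[len - 1] != 'H'))
              return 0;
          if (!isdigit((unsigned char)tok[0]))     /* assemblers want 0FFh, not FFh */
              return 0;
          for (size_t i = 0; i < len - 1; i++)
              if (!isxdigit((unsigned char)tok[i]))
                  return 0;
          /* drop a purely cosmetic leading 0 (0FF -> FF), then add the 0x prefix */
          const char *digits = tok;
          size_t ndig = len - 1;
          if (ndig > 1 && digits[0] == '0') { digits++; ndig--; }
          snprintf(out, outsz, "0x%.*s", (int)ndig, digits);
          return 1;
      }

      int main(void)
      {
          char line[512];
          while (fgets(line, sizeof line, stdin)) {   /* asm source on stdin */
              char *tok = strtok(line, " \t\n,");
              while (tok) {
                  char buf[64];
                  printf("%s ", rewrite_hex_token(tok, buf, sizeof buf) ? buf : tok);
                  tok = strtok(NULL, " \t\n,");
              }
              printf("\n");
          }
          return 0;
      }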


  • How can I know what this does?

    - by Dabor Troppe
    I got this piece of Assembly code extracted from some piece of software, but unfortunately I don't know anything of assembler and the bits I touched of Assembler was back in the Commodore Amiga with the 68000. Can anybody guide me on how I could understand this code without me needing to learn assembler from scratch, or just tell me what it does? Is there any kind of "Simulator" out there that I can run this on to see what it does? -[ObjSample Param1:andParam2:]: 00000c79 pushl %ebp 00000c7a movl %esp,%ebp 00000c7c subl $0x48,%esp 00000c7f movl %ebx,0xf4(%ebp) 00000c82 movl %esi,0xf8(%ebp) 00000c85 movl %edi,0xfc(%ebp) 00000c88 calll 0x00000c8d 00000c8d popl %ebx 00000c8e cmpb $-[ObjSample delegate],_bDoOnce.26952-0xc8d(%ebx) 00000c95 jel 0x00000d47 00000c9b movb $-[ObjSample delegate],_bDoOnce.26952-0xc8d(%ebx) 00000ca2 movl 0x7dc0-0xc8d(%ebx),%eax 00000ca8 movl %eax,0x04(%esp) 00000cac movl 0x7df4-0xc8d(%ebx),%eax 00000cb2 movl %eax,(%esp) 00000cb5 calll _objc_msgSend 00000cba movl 0x7dbc-0xc8d(%ebx),%edx 00000cc0 movl %edx,0x04(%esp) 00000cc4 movl %eax,(%esp) 00000cc7 calll _objc_msgSend 00000ccc movl %eax,0xe4(%ebp) 00000ccf movl 0x7db8-0xc8d(%ebx),%eax 00000cd5 movl %eax,0x04(%esp) 00000cd9 movl 0xe4(%ebp),%eax 00000cdc movl %eax,(%esp) 00000cdf calll _objc_msgSend 00000ce4 leal (%eax,%eax),%edi 00000ce7 movl %edi,(%esp) 00000cea calll _malloc 00000cef movl %eax,%esi 00000cf1 movl %edi,0x08(%esp) 00000cf5 movl $-[ObjSample delegate],0x04(%esp) 00000cfd movl %eax,(%esp) 00000d00 calll _memset 00000d05 movl $0x00000004,0x10(%esp) 00000d0d movl %edi,0x0c(%esp) 00000d11 movl %esi,0x08(%esp) 00000d15 movl 0x7db4-0xc8d(%ebx),%eax 00000d1b movl %eax,0x04(%esp) 00000d1f movl 0xe4(%ebp),%eax 00000d22 movl %eax,(%esp) 00000d25 calll _objc_msgSend 00000d2a xorl %edx,%edx 00000d2c movl %edi,%eax 00000d2e shrl $0x03,%eax 00000d31 jmp 0x00000d34 00000d33 incl %edx 00000d34 cmpl %edx,%eax 00000d36 ja 0x00000d33 00000d38 movl %esi,(%esp) 00000d3b calll _free 00000d40 movb $0x01,_isAuthenticated-0xc8d(%ebx) 00000d47 movzbl _isAuthenticated-0xc8d(%ebx),%eax 00000d4e movl 0xf4(%ebp),%ebx 00000d51 movl 0xf8(%ebp),%esi 00000d54 movl 0xfc(%ebp),%edi 00000d57 leave 00000d58 ret


  • 1k of Program Space, 64 bytes of RAM. Is 1 wire communication possible?

    - by Earlz
    (If you're lazy, see the bottom for the TL;DR.) Hello, I am planning to build a new (prototype) project dealing with physical computing. Basically, I have wires. These wires all need to have their voltage read at the same time. More than a few hundred microseconds difference between the readings of each wire will completely screw it up. The Arduino takes about 114 microseconds. So the most I could read is 2 or 3 wires before the latency would skew the accuracy of the readings. So my plan is to have an Arduino as the "master" of an array of ATTinys. The Arduino is pretty cramped for space, but it's a massive playground compared to the tinys. An ATTiny13A has 1k of flash ROM (program space), 64 bytes of RAM, and 64 bytes of (not-durable and slow) EEPROM. (I'm choosing this for price as well as size.) The ATTinys in my system will not do much. Basically, all they will do is wait for a signal from the Master, then read the voltage of 1 or 2 wires and store it in RAM (or possibly EEPROM if it's that cramped), and then send it to the Master using only 1 wire for data (no room for more than that!). So far then, all I should have to do is implement trivial voltage-reading code (using the built-in ADC). But it's this communication bit I'm worried about. Do you think a communication protocol (using just 1 wire!) could even be implemented within such constraints? TL;DR: In less than 1k of program space and 64 bytes of RAM (and 64 bytes of EEPROM), do you think it is possible to implement a 1-wire communication protocol? Would I need to drop to assembly to make it fit? I know that currently my Arduino programs linking to the Wiring library are over 8k, so I'm a bit concerned.
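
    To gauge whether it fits, here is a hedged sketch in AVR C (avr-libc) of the usual fallback: a fixed-baud, UART-style software transmit on a single GPIO. The pin choice, baud rate, and framing are assumptions for illustration, not a worked-out protocol; compiled with avr-gcc -Os, a routine like this should come in well under 1k, and C is normally fine here, with assembly only needed if the bit timing has to be cycle-exact.

      /* Hedged sketch: software-serial transmit of one byte on PB0 of an
       * ATtiny13, LSB first, 8N1 framing at roughly 9600 baud. The
       * receiving Arduino would have to agree on the pin and the rate. */
      #define F_CPU 1200000UL            /* ATtiny13 factory default: 9.6 MHz / 8 */
      #include <avr/io.h>
      #include <util/delay.h>

      #define BIT_US (1000000UL / 9600)  /* ~104 us per bit */

      static void onewire_tx_byte(uint8_t b)
      {
          DDRB |= _BV(PB0);              /* drive the shared wire */

          PORTB &= ~_BV(PB0);            /* start bit (low) */
          _delay_us(BIT_US);

          for (uint8_t i = 0; i < 8; i++) {
              if (b & 1) PORTB |= _BV(PB0);
              else       PORTB &= ~_BV(PB0);
              b >>= 1;
              _delay_us(BIT_US);
          }

          PORTB |= _BV(PB0);             /* stop bit / idle high */
          _delay_us(BIT_US);
      }

      int main(void)
      {
          for (;;) {
              /* placeholder: an ADC read of the wire would go here */
              onewire_tx_byte(0x55);
              _delay_ms(100);
          }
      }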


  • using in-line asm to write a for loop with 2 comparisons

    - by aCuria
    I want to convert the for loop in the following code into assembly but I am not sure how to start. An explanation of how to do it and why it works would be appreciated. I am using VS2010, C++, writing for the x86. The code is as follows:

      for (n = 0; norm2 < 4.0 && n < N; ++n) {
          __asm{
              ///a*a - b*b + x
              fld a               // a
              fmul st(0), st(0)   // aa
              fld b               // b aa
              fmul st(0), st(0)   // bb aa
              fsub                // (aa-bb) // st(0) - st(1)
              fld x               // x (aa-bb)
              fadd                // (aa-bb+x)
              /// 2.0*a*b + y;
              fld d               // d (aa-bb+x)
              fld a               // d a (aa-bb+x)
              fmul                // ad (aa-bb+x)
              fld b               // b ad (aa-bb+x)
              fmul                // abd (aa-bb+x)
              fld y               // y adb (aa-bb+x)
              fadd                // b:(adb+y) a:(aa-bb+x)
              fld st(0)           //b b:(adb+y) a:(aa-bb+x)
              fmul st(0), st(0)   // bb b:(adb+y) a:(aa-bb+x)
              fld st(2)           // a bb b:(adb+y) a:(aa-bb+x)
              fmul st(0), st(0)   // aa bb b:(adb+y) a:(aa-bb+x)
              fadd                // aa+bb b:(adb+y) a:(aa-bb+x)
              fstp norm2          // store aa+bb to norm2, st(0) is popped.
              fstp b
              fstp a
          }
      }
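
    One way to see what the loop control has to become is to first rewrite the for loop with explicit tests and gotos; each test then maps one-to-one onto a compare followed by a conditional jump (one comparison of norm2 against 4.0, one of n against N), with the increment and back-edge at the bottom. A hedged C sketch of that restructuring (body and names are placeholders, not the original code):

      /* Same loop shape, restructured so the control flow mirrors assembly:
       * two separate compare-and-branch tests at the top, explicit increment
       * and unconditional jump at the bottom. */
      static void body(double *norm2) { *norm2 *= 1.1; }   /* stand-in for the __asm block */

      void escape_loop(double norm2, int N)
      {
          int n = 0;
      loop_top:
          if (!(norm2 < 4.0)) goto loop_exit;  /* compare norm2 with 4.0, jump out if not less */
          if (!(n < N))       goto loop_exit;  /* cmp n, N ; jge loop_exit */

          body(&norm2);                        /* the existing __asm block goes here */

          ++n;                                 /* inc n */
          goto loop_top;                       /* jmp loop_top */
      loop_exit:
          return;
      }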


  • bin-deploying DLLs banned in lieu of GAC on shared IIS 6 servers

    - by craigmoliver
    I need to solicit feedback about a recent security policy change at an organization I work with. They have recently banned the bin-deployment of DLLs to shared IIS 6 application servers. These servers host many isolated web application pools. The new rules require all DLLs to be installed in the GAC. This is a problem for me because I bin-deploy several DLLs, including the ASP.NET MVC Framework, HTML Agility Pack, ELMAH, and my own shared class libraries. I do this because it: eliminates web application server dependencies on the Global Assembly Cache; allows me (the developer) to have control of what goes on inside my application; enables the application to be deployed as a "package"; and removes the application deployment burden from the server administrators. Now, here are my questions. From a security perspective, what are the advantages of using the GAC vs. bin-deployment? Is it possible to host multiple versions of the same DLL in the GAC? Has anyone run into similar restrictions?


  • Maven assemblies: putting each dependency with transitive dependencies in its own directory?

    - by jr
    I have a Maven project which consists of a few modules. This is to be deployed on a client machine, will involve installing Tomcat, and will make use of NSIS for the installer. There is a separate application which monitors Tomcat and can restart it, perform updates, etc. So, I have the modules set up as follows:

      project
      +-- client        (all code, handlers, for the war)
      +-- client-common (shared code, shared between monitor and client)
      +-- client-web    (the war; basically just uses war packaging, has applicationcontext, web.xml, etc.)
      +-- monitor       (the monitor application jar. Uses wrapper to run)

    So, I need to create an installer. I was planning on creating another module which would be the installer. This is where I would have the Tomcat directory, and I'd like Maven to "assemble" everything and then run NSIS so I can create the final installer. However, I need to have the monitor jar file in a directory and then have all of monitor's dependencies in a lib/ directory. The final directory structure should be:

      project-installer-directory/monitor/monitor-version.jar
      project-installer-directory/monitor/lib/monitor-dep-1.jar
      project-installer-directory/monitor/lib/monitor-dep-2.jar
      project-installer-directory/monitor/lib/monitor-dep-3.jar
      project-installer-directory/webapps/client-web.war

    where the client-web\WEB-INF\lib directory will have all of client-web's dependencies after it is exploded. That works; I have the .war file. What I am having problems with is getting the monitor module's dependencies independent of the dependencies of the client-web module. I tried to just create the installer module and make monitor and client-web its dependencies, but when I use dependencies-copy it gives me everything. Not what I want. I'm leaning towards creating a new module called monitor-assembly or something to give me a zip file which contains the directory format I need, but that is yet another module. Can someone please help me with the correct way to accomplish this? Thanks!


  • Maven - 'all' or 'parent' project for aggregation?

    - by disown
    For educational purposes I have set up a project layout like so (flat in order to suit Eclipse better):

      -product
       |
       |-parent
       |-core
       |-opt
       |-all

    Parent contains an aggregate project with core, opt and all. Core implements the mandatory part of the application. Opt is an optional part. All is supposed to combine core with opt, and has these two modules listed as dependencies. I am now trying to make the following artifacts:

      product-core.jar
      product-core-src.jar
      product-core-with-dependencies.jar
      product-opt.jar
      product-opt-src.jar
      product-opt-with-dependencies.jar
      product-all.jar
      product-all-src.jar
      product-all-with-dependencies.jar

    Most of them are fairly straightforward to produce. I do have some problems with the aggregating artifacts, though. I have managed to make the product-all-src.jar with a custom assembly descriptor in the 'all' module which downloads the sources for all non-transitive deps, and this works fine. This technique also allows me to make the product-all-with-dependencies.jar. However, I recently found out that you can use the source:aggregate goal in the source plugin to aggregate the sources of the entire aggregate project. This is also true for the javadoc plugin, which also aggregates through the usage of the parent project. So I am torn between my 'all' module approach and ditching the 'all' module to just use the 'parent' module for all aggregation. It feels unclean to have some aggregate artifacts produced in 'parent' and others produced in 'all'. Is there a way of making a 'product-all' jar in the parent project, or of aggregating javadoc in the 'all' project? Or should I just keep both? Thanks


  • Handling Types Defined in Plug-ins That Are No Longer Available

    - by Chris
    I am developing a .NET framework application that allows users to maintain and save "projects". A project can consist of components whose types are defined in the assemblies of the framework itself and/or in third-party assemblies that will be made available to the framework via a yet-to-be-built plug-in architecture. When a project is saved, it is simply binary-serialised to file. Projects are portable, so multiple users can load the same project into their own instances of the framework (just as different users may open the same MSWord document in their own local copies of MSWord). What's more, the plug-ins available to one user's framework might not be available to that of another. I need some way of ensuring that when a user attempts to open (i.e. deserialise) a project that includes a type whose defining assembly cannot be found (either because of a framework version incompatibility or the absence of a plug-in), the project still opens but the offending type is somehow substituted or omitted. Trouble is, the research I've done to date does not even hint at a suitable approach. Any ideas would be much appreciated, thanks.


  • How to insert zeros between bits in a bitmap?

    - by anatolyg
    I have some performance-heavy code that performs bit manipulations. It can be reduced to the following well-defined problem: given a 13-bit bitmap, construct a 26-bit bitmap that contains the original bits spaced at even positions. To illustrate:

      0000000000000000000abcdefghijklm (input, 32 bits)
      0000000a0b0c0d0e0f0g0h0i0j0k0l0m (output, 32 bits)

    I currently have it implemented in the following way in C:

      if (input & (1 << 12)) output |= 1 << 24;
      if (input & (1 << 11)) output |= 1 << 22;
      if (input & (1 << 10)) output |= 1 << 20;
      ...

    My compiler (MS Visual Studio) turned this into the following:

      test eax,1000h
      jne 0064F5EC
      or edx,1000000h
      ... (repeated 13 times with minor differences in constants)

    I wonder whether I can make it any faster. I would like to have my code written in C, but switching to assembly language is possible. Can I use some MMX/SSE instructions to process all bits at once? Maybe I can use multiplication (multiply by 0x11111111 or some other magical constant)? Would it be better to use a condition-set instruction (SETcc) instead of a conditional-jump instruction? If yes, how can I make the compiler produce such code for me? Any other ideas on how to make it faster? Any idea how to do the inverse bitmap transformation (I have to implement it too, but it's less critical)?
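
    For reference, one branch-free approach commonly used for exactly this bit spreading is the "part1by1" step of Morton/Z-order interleaving: a handful of shift-and-mask steps, with the inverse running the same masks the other way. A hedged C sketch:

      #include <stdint.h>

      /* Spread the low 13 bits of x so that bit i moves to bit 2*i
       * (the classic "part1by1" step used for Morton/Z-order interleaving). */
      static uint32_t spread_even(uint32_t x)
      {
          x &= 0x1FFF;                        /* keep the 13 input bits         */
          x = (x | (x << 8)) & 0x00FF00FF;
          x = (x | (x << 4)) & 0x0F0F0F0F;
          x = (x | (x << 2)) & 0x33333333;
          x = (x | (x << 1)) & 0x55555555;    /* one zero between every bit     */
          return x;
      }

      /* Inverse transformation: gather the even-position bits back together. */
      static uint32_t gather_even(uint32_t x)
      {
          x &= 0x55555555;
          x = (x | (x >> 1)) & 0x33333333;
          x = (x | (x >> 2)) & 0x0F0F0F0F;
          x = (x | (x >> 4)) & 0x00FF00FF;
          x = (x | (x >> 8)) & 0x0000FFFF;
          return x;
      }

    Whether this beats the branchy version on any particular CPU is still worth measuring; on much newer x86 parts a single BMI2 pdep/pext instruction pair does the same spreading and gathering, though that hardware postdates this question.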


  • Programming Environment for a Motorola 68000 in Linux

    - by Nick Presta
    Greetings all, I am taking a Structure and Application of Microcomputers course this semester and we're programming with the Motorola 68000 series CPU/board. The course syllabus suggests running something like Easy68K or Teesside Motorola 68000 Assembler/Emulator at home to test our programs. I told my prof I run x64 Linux and asked what sort of environment I would need to complete my coursework. He said that the easiest environment to use is a Windows XP 32bit VM with one of the two suggested applications installed, however, he doesn't really care what I use as long as I can test what I write at home. So I'm asking if there exists some sort of emulator or environment for Linux so I can test my code, and what sort of caveats I will run into by writing and testing my code in Linux. Also, I plan to do my editing in Vim, which probably isn't a problem, but I would like any insight into editors for 68000 assembly, if you have any. Thanks! EDIT: Just to clarify - I don't want to install Linux on the board at all - I want to program on my home machine, test the code locally, and then bring it onto the board for grading/running.


  • Question about CALL statement

    - by Bruce
    I have the following code in VC++:

      Func5() { StackWalk(); }
      Func4() { Func5(); }

    I am a beginner in x86 assembly language. I am trying to find out the starting address of Func5(). I get Func5()'s return address from its stack frame. Now, before this return address there should be a CALL instruction, so I extract the bytes before the return address. Sometimes it's a near call like E8 ff ff ff d8. For this statement I subtract the offset 0x28 from the function's return address to get Func5()'s base address (where it resides in memory). The problem is I don't know how to calculate this for an indirect NEAR call. I have been trying to find out how to do it for some time now. I have extracted the first 5 bytes before the return address and they are ff 75 08 ff d2. I think this stands for CALL ECX (ff d2) but I am not sure. I will be very grateful if someone can tell me what kind of CALL statement this is and how I can calculate the function's base address from this kind of call.
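
    For what it's worth, ff d2 decodes as call edx (the FF /2 form of CALL, an indirect near call through a register), and the ff 75 08 before it is push dword ptr [ebp+8]. With an indirect call the target address is not encoded in the instruction bytes at all (it was in a register or memory at run time), so it cannot be recovered from the call site alone. Below is a hedged C sketch of the direct-call case the question already handles, decoding E8 rel32 at the return address and bailing out otherwise; note it assumes the byte at retaddr-5 really is the start of the call instruction, which a shorter call form would break.

      #include <stdint.h>
      #include <string.h>

      /* Given a return address found on the stack, try to recover the target of
       * the CALL that produced it. Only the E8 rel32 (call relative near) form
       * carries the target in its bytes; for the FF /2 and FF /3 indirect forms
       * (e.g. ff d2 = call edx) this returns NULL. */
      static const void *call_target_from_return_address(const void *retaddr)
      {
          const uint8_t *ra = (const uint8_t *)retaddr;

          if (ra[-5] == 0xE8) {                    /* E8 rel32: 5-byte relative call */
              int32_t rel;
              memcpy(&rel, ra - 4, sizeof rel);    /* little-endian displacement */
              return ra + rel;                     /* target = return address + rel32 */
          }
          return NULL;                             /* indirect call: not recoverable here */
      }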


  • Why do IDEs have to be made in the language they are designed for?

    - by Em Ae
    Look at the IntelliJ IDEA IDE: it's a pretty sick IDE, but it's made in Java, and we all know that Java sucks at GUIs. The same goes for Eclipse. Though it's way better and adopted SWT, it could have been best if it had been developed in C/C++. We have really good systems now, and that's why we don't notice that these IDEs are nothing much but memory hogs. Why do IDEs have to be written in the language they are designed for? Okay, I know that an IDE is a cool way to show how strong a language can be, but even then, sometimes that specific language might not be the best for a particular task.


  • Is sticking to one language on a particular project a good practice?

    - by Ans
    I'm developing a pipeline for processing text that will go into production. The question I keep asking myself is: should I stick to one language for the project when I'm looking for a tool to do a particular task (e.g. NLTK, PDFMiner, CLD, CRFsuite, etc.)? Or is it OK to mix and match languages on the project? So I pick the best tool regardless of what language it's written in (e.g. OpenNLP, ParsCit, poppler, CRF++, etc.) and wrap my code around it? Note, I am not asking whether a developer should stick to just one language for their career.


  • Have you ever used a non-mainstream language in a project? Why?

    - by EpsilonVector
    I was thinking about my academic experience with Smalltalk (well, Squeak) a while ago and whether I would like to use it for something, and it got me thinking: sure, it's as good and capable as any popular language, and it has some nice ideas, but there are certain languages that are already well entrenched in certain niches of programming (C is for systems programming, Java is for portability, and so on...), and Smalltalk and co. don't seem to have any obvious differentiating features to make them the right choice under certain circumstances, or at least not as far as I can tell. And when you add the fact that it's harder to find programmers who know it, it creates all sorts of other problems for the organization itself. So if you ever worked on a project where a non-mainstream language (like Smalltalk) was used over a more mainstream one, what was the reason for it? To clarify: I'd like to focus this on imperative languages, since other paradigms like functional and logic programming languages, while not necessarily mainstream, can still be good choices for certain projects for obvious reasons.


  • Keep a programming language backwards compatible vs. fixing its flaws

    - by Radu Murzea
    First, some context (stuff that most of you know anyway): Every popular programming language has a clear evolution, most of the time marked by its version: you have Java 5, 6, 7 etc., PHP 5.1, 5.2, 5.3 etc. Releasing a new version makes new APIs available, fixes bugs, adds new features, new frameworks etc. So all in all: it's good. But what about the language's (or platform's) problems? If and when there's something wrong in a language, developers either avoid it (if they can) or they learn to live with it. Now, the developers of those languages get a lot of feedback from the programmers that use them. So it kind of makes sense that, as time (and version numbers) goes by, the problems in those languages will slowly but surely go away. Well, not really. Why? Backwards compatibility, that's why. But why is this so? Read below for a more concrete situation. The best way I can explain my question is to use PHP as an example: PHP is loved by thousands of people and hated by just as many thousands. All languages have flaws, but apparently PHP is special. Check out this blog post. It has a very long list of so-called flaws in PHP. Now, I'm not a PHP developer (not yet), but I read through all of it and I'm sure that a big chunk of that list is indeed real issues. (Not all of it, since it's potentially subjective.) Now, if I were one of the guys who actively develops PHP, I would surely want to fix those problems, one by one. However, if I do that, then code that relies on a particular behaviour of the language will break if it runs on the new version. Summing it up in 2 words: backwards compatibility. What I don't understand is: why should I keep PHP backwards compatible? If I release PHP version 8 with all those problems fixed, can't I just put a big warning on it saying: "Don't run old code on this version!"? There is a thing called deprecation. We've had it for years and it works. In the context of PHP: look at how these days people actively discourage the use of the mysql_* functions (and instead recommend mysqli_* and PDO). Deprecation works. We can use it. We should use it. If it works for functions, why shouldn't it work for entire languages? Let's say I (the developer of PHP) do this:
    1. Launch a new version of PHP (let's say 8) with all of those flaws fixed.
    2. New projects will start using that version, since it's much better, clearer, more secure etc.
    3. However, in order not to abandon older versions of PHP, I keep releasing updates to them, fixing security issues, bugs etc. This makes sense for reasons that I'm not listing here. It's common practice: look for example at how Oracle kept updating version 5.1.x of MySQL, even though it mostly focused on version 5.5.x.
    4. After about 3 or 4 years, I stop updating the old versions of PHP and leave them to die. This is fine, since in those 3 or 4 years, most projects will have switched to PHP 8 anyway.
    My question is: Do all these steps make sense? Would it be so hard to do? If it can be done, then why isn't it done? Yes, the downside is that you break backwards compatibility. But isn't that a price worth paying? As an upside, in 3 or 4 years you'll have a language that has 90% of its problems fixed.... a language much more pleasant to work with. Its name will ensure its popularity. EDIT: OK, so I didn't express myself correctly when I said that in 3 or 4 years people will move to the hypothetical PHP 8. What I meant was: in 3 or 4 years, people will use PHP 8 if they start a new project.


  • How much time does it take for a new language like D to become popular? [closed]

    - by Adrián Pérez
    I was reading about new languages to learn and I found very good comments about D, like "it's the new C" or "what C++ should have been". Knowing that many people say wonderful things about the language, I'm wondering how much time it usually takes for a language to become popular. That is, having libraries ported to or written natively for the language, and being used in serious software development. I have read about the history of Java and Python to figure it out, but maybe their situations are too complex to say that D's development will take the same amount of time as theirs did.


  • How important is using the same language for client and server?

    - by Makita
    I have been evaluating architecture solutions for a mobile project that will have a web service/app in addition to native apps, and have been looking at various libraries, frameworks, and stacks; Meteor, being a sort of "open stack package framework", is tightly bound to Node.js. There is a lot of talk about the benefits of using the same language on both the client and the server side, and I'm not getting it. I could understand it if you want to mirror the entire state of a web application on both client and server, but I'm struggling to find other wins... Workflow efficiency? I'm trying to understand why client/server language parity is considered to be a holy grail. Why does client/server language parity matter in software development?

