
middleware's Issues

FCC 4.0.8: Include metrics option

Background:
It's possible to set up a monitoring environment for the Fiscal Cloud Connector (FCC). The FCC exposes metrics that can be fetched by monitoring solutions like Prometheus and visualized in Grafana. The metrics need to be activated in the FCC. We have at least one customer who wants to set up FCC monitoring.

How to activate:
There are two options to activate the metrics:

  • At installation/initialization time of the FCC: for this, the "enable-components metrics" parameter is available

  • After the FCC installation/initialization, the metrics can be activated by adding the following parameter to run_fcc.bat: -Dspring.profiles.active=metrics

It should be possible to activate the metrics in the SCU configuration.

How to test:
You can test whether the metrics work without having to set up Prometheus by opening the following URL in your browser. If data is shown there, the metrics are active:
localhost:20001/actuator/metrics/http.server.requests
The port can differ if an individual fccPort was configured.

More info:
https://documentation.fiskal.cloud/md/fiskal_cloud_connector/fiskal_cloud_connector/#metrics
https://documentation.fiskal.cloud/md/fiskal_cloud_connector/fiskal_cloud_connector/#list-of-installer-parameters-for-unattended-mode

Tasks

  • Add boolean SCU parameter EnableFccMetrics to DeutscheFiskal and SwissbitCloud SCUs
  • If the SCU parameter is set to true, add the -Dspring.profiles.active=metrics parameter when starting the FCC
  • Update SCU parameter documentation on docs.fiskaltrust.cloud
  • Test if metrics are generated (as described above)
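A minimal sketch of the run-script patching step (in Python for illustration; the actual SCUs are .NET, and the structure of the `java` launch line in run_fcc.bat is an assumption):

```python
import re

def enable_fcc_metrics(script_text: str) -> str:
    """Append -Dspring.profiles.active=metrics to the java launch line, if missing."""
    flag = "-Dspring.profiles.active=metrics"
    patched = []
    for line in script_text.splitlines():
        if "java" in line and flag not in line:
            # insert the system property right after the java executable
            line = re.sub(r"(\bjava(?:\.exe)?\b)", r"\1 " + flag, line, count=1)
        patched.append(line)
    return "\n".join(patched)
```

The check for an existing flag keeps the operation idempotent, so re-running it on an already patched script changes nothing.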

Protocol receipts should not be signed in Austria

Problem

When using the ftReceiptCase 0x415400000000000D ("protocol"), the receipt is signed if it includes ftChargeItems and the ftPayItems are empty. This means that these receipts are also included in the DEP7 export, but they shouldn't be. This is incorrect according to the RKSV regulations and has been criticized by auditors in two cases now.

Solution

Protocol receipts without pay items shouldn't be signed either (and neither should those with pay items, but that's already the case).
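The intended 1.3 behavior can be sketched as follows (Python for illustration; masking the low word of ftReceiptCase is an assumption about how the case is identified):

```python
PROTOCOL_CASE = 0x000D  # low word of ftReceiptCase 0x415400000000000D

def should_sign(ft_receipt_case: int) -> bool:
    """Protocol receipts are never signed (and never included in the DEP7
    export), regardless of whether pay items are present."""
    return (ft_receipt_case & 0xFFFF) != PROTOCOL_CASE
```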

Warning

We will only change this behavior in 1.3.

Tasks

  • Do not sign protocol receipts without pay items in 1.3
  • Test in 1.3 (sample request below)
  • Add unit/integration tests to reproduce the behavior
  • Inform @mijomilicevic, so that he can reach out to the affected customers

Sample

Request:

{
    "ftCashBoxID": "620cc7c3-78ab-4f70-b75b-63fad09e4b78",
    "ftPosSystemId": "b3dc6573-96d9-e611-80f7-5065f38adae1",
    "cbTerminalID": "18566",
    "cbReceiptReference": "489fda45-4994-4bb4-b206-2656483561aa",
    "cbReceiptMoment": "2023-12-12T20:58:55.083Z",
    "cbChargeItems": [
        {
            "Quantity": 1.0000,
            "Description": "Süß gsp 1/4",
            "Amount": 3.50000000000000000000000000,
            "VATRate": 20.0000,
            "ftChargeItemCase": 4707387510509010947,
            "ftChargeItemCaseData": "",
            "VATAmount": 0.5833333333333333333333333333,
            "CostCenter": "2",
            "ProductGroup": "Wein",
            "ProductNumber": "5014",
            "ProductBarcode": "",
            "Unit": "Liter",
            "Moment": "2023-12-12T20:58:38.833Z"
        },
        {
            "Quantity": 1.0000,
            "Description": "Karpfen geb",
            "Amount": 14.90000000000000000000000000,
            "VATRate": 10.0000,
            "ftChargeItemCase": 4707387510509010945,
            "ftChargeItemCaseData": "",
            "VATAmount": 1.354545454545454545454545455,
            "CostCenter": "2",
            "ProductGroup": "Fisch",
            "ProductNumber": "14004",
            "ProductBarcode": "",
            "Unit": "Stk",
            "Moment": "2023-12-12T20:58:30.897Z"
        },
        {
            "Quantity": 1.0000,
            "Description": "Somgsp Weiß 1/2",
            "Amount": 4.20000000000000000000000000,
            "VATRate": 20.0000,
            "ftChargeItemCase": 4707387510509010947,
            "ftChargeItemCaseData": "",
            "VATAmount": 0.70000000000000000000000000,
            "CostCenter": "2",
            "ProductGroup": "Wein",
            "ProductNumber": "5004",
            "ProductBarcode": "",
            "Unit": "Liter",
            "Moment": "2023-12-12T20:58:35.893Z"
        },
        {
            "Quantity": 1.0000,
            "Description": "Verlängerter",
            "Amount": 3.00000000000000000000000000,
            "VATRate": 20.0000,
            "ftChargeItemCase": 4707387510509010947,
            "ftChargeItemCaseData": "",
            "VATAmount": 0.50000000000000000000000000,
            "CostCenter": "2",
            "ProductGroup": "Kaffee",
            "ProductNumber": "4007",
            "ProductBarcode": "",
            "Unit": "Stk",
            "Moment": "2023-12-12T20:58:42.067Z"
        },
        {
            "Quantity": 1.0000,
            "Description": "Schweincordon",
            "Amount": 14.50000000000000000000000000,
            "VATRate": 10.0000,
            "ftChargeItemCase": 4707387510509010945,
            "ftChargeItemCaseData": "",
            "VATAmount": 1.318181818181818181818181818,
            "CostCenter": "2",
            "ProductGroup": "Hauptspeisen",
            "ProductNumber": "13017",
            "ProductBarcode": "",
            "Unit": "Stk",
            "Moment": "2023-12-12T20:58:30.133Z"
        },
        {
            "Quantity": 1.0000,
            "Description": "Somgsp Weiß 1/4",
            "Amount": 2.40000000000000000000000000,
            "VATRate": 20.0000,
            "ftChargeItemCase": 4707387510509010947,
            "ftChargeItemCaseData": "",
            "VATAmount": 0.40000000000000000000000000,
            "CostCenter": "2",
            "ProductGroup": "Wein",
            "ProductNumber": "5005",
            "ProductBarcode": "",
            "Unit": "Liter",
            "Moment": "2023-12-12T20:58:36.103Z"
        },
        {
            "Quantity": 1.0000,
            "Description": "GartenRadler 0,5",
            "Amount": 4.70000000000000000000000000,
            "VATRate": 20.0000,
            "ftChargeItemCase": 4707387510509010947,
            "ftChargeItemCaseData": "",
            "VATAmount": 0.7833333333333333333333333333,
            "CostCenter": "2",
            "ProductGroup": "Bier",
            "ProductNumber": "1004",
            "ProductBarcode": "",
            "Unit": "Liter",
            "Moment": "2023-12-12T20:58:46.903Z"
        },
        {
            "Quantity": 1.0000,
            "Description": "Cola 0,33",
            "Amount": 3.50000000000000000000000000,
            "VATRate": 20.0000,
            "ftChargeItemCase": 4707387510509010947,
            "ftChargeItemCaseData": "",
            "VATAmount": 0.5833333333333333333333333333,
            "CostCenter": "2",
            "ProductGroup": "Alkoholfrei",
            "ProductNumber": "2022",
            "ProductBarcode": "",
            "Unit": "Liter",
            "Moment": "2023-12-12T20:58:44.99Z"
        }
    ],
    "cbPayItems": [],
    "ftReceiptCase": 4707387510509010957,
    "cbReceiptAmount": 50.70,
    "cbUser": "Ernst"
}

Response:

{
    "ftCashBoxID": "620cc7c3-78ab-4f70-b75b-63fad09e4b78",
    "ftQueueID": "cb5ee991-a682-41e4-b066-2c18477677b4",
    "ftQueueItemID": "e9b083d9-ad54-48f7-bf26-50c913044abc",
    "ftQueueRow": 95900,
    "cbTerminalID": "18566",
    "cbReceiptReference": "489fda45-4994-4bb4-b206-2656483561aa",
    "ftCashBoxIdentification": "rk-01",
    "ftReceiptIdentification": "ft1769B#91036",
    "ftReceiptMoment": "2023-12-12T20:58:55.189502Z",
    "ftSignatures": [
        {
            "ftSignatureFormat": 3,
            "ftSignatureType": 4707387510509010945,
            "Caption": "www.fiskaltrust.at",
            "Data": "_R1-AT1_rk-01_ft1769B#91036_2023-12-12T21:58:55_21,30_29,40_0,00_0,00_0,00_PTaI8H0=_588fa483_Jjzb06c1fyA=_J7BuwN0wJJLokEPixJn58ttBvLBAHff6qSmSctbb8VZsBQxyc1ChTAqflI2DJon3e1fjRZnFDTABtutzPUf5DA=="
        }
    ],
    "ftState": 4707387510509010944
}

DE - FCC SetFccHeapMemory has no effect on run script

Description

In package version fiskaltrust.Middleware.SCU.DE.DeutscheFiskal v1.3.40 we introduced the possibility to set the heap size for the FCC, to tackle OutOfMemory exceptions with large TAR files. The parameter "SetFccHeapMemory" can be set in the Portal and is supposed to make the FCC run with this value in the following cases:

  • FCC initialization
  • FCC update
  • FCC regular service run

The Problem

It seems that the heap setting has no effect on the regular service run. We use run_fcc.bat/sh to start the FCC, so the heap value in run_fcc.bat (-Xmx) should change to the value set in SetFccHeapMemory. This doesn't happen: the FCC always starts with -Xmx 256 even if the value is set. I tried to find the reason, but I don't see any issue with the function we use to change the .bat:

public static void SetFccHeapMemoryForRunScript(string fccDirectory, int fccHeapMemory)
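What the function is supposed to do can be sketched like this (Python for illustration; the actual implementation is the C# method above, and the exact shape of the -Xmx token in the script is an assumption):

```python
import re

def set_fcc_heap_memory_for_run_script(script_text: str, fcc_heap_memory: int) -> str:
    """Rewrite the -Xmx value in run_fcc.bat/.sh so the FCC starts with the
    configured heap size. Matched case-insensitively, since logs show '-xmx'."""
    return re.sub(r"-Xmx\d+[mMgG]?", f"-Xmx{fcc_heap_memory}m",
                  script_text, flags=re.IGNORECASE)
```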

To Reproduce

  • Set the "SetFccHeapMemory" parameter in the SCU to a value other than 256 (e.g. 1024)
  • RebuildConfiguration
  • Restart the middleware service
  • Check the middleware log for the FCC startup and see if the -Xmx value matches what you configured in the SCU

Tasks

  • The FCC should start with the correct heap memory
  • The logs should indicate the correct value

Error handling improvement suggestion from customer-email

Dear fiskaltrust team,

we kindly ask whether it is possible to extend your IPos.v0 interface.
Currently, erroneous requests are rejected where we would expect a fiskaltrust.ifPOS.v0.ReceiptResponse with some error details.
This is very difficult to handle.

If, for instance, we sent a fiskaltrust.ifPOS.v0.ReceiptRequest with an invalid value in ftCashBoxID, we suggest returning an answer like the JSON sample below:

  • ftState should indicate a reject, e.g. "0x44450000000000FF"
  • ftStateData could return the reason for the rejection, e.g. "ftCashBoxID invalid"
    Another place to return the reason could be an ftSignatureItem with a special ftSignatureType (e.g. "0x44450000000000FF")
  • Since the data is not stored on your side, your return values, e.g. ftQueueItemID and ftQueueRow, should be empty or -1.
  • To make it more obvious that this is a rejection, no data from the cbReceiptRequest needs to be returned, e.g. ftCashBoxID.

In JSON, the fiskaltrust.ifPOS.v0.ReceiptResponse would then look like this:
{
    "ftCashBoxID": "",
    "ftQueueID": "",
    "ftQueueItemID": "",
    "ftQueueRow": -1,
    "cbTerminalID": "",
    "cbReceiptReference": "",
    "ftCashBoxIdentification": "",
    "ftReceiptIdentification": "",
    "ftReceiptMoment": "2021-04-30T04:35:57.126946Z",
    "ftSignatures": [
        {
            "ftSignatureFormat": "0x1",
            "ftSignatureType": "0x44450000000000FF",
            "Caption": "Error",
            "Data": "ftCashBoxID invalid"
        }
    ],
    "ftState": "0x44450000000000FF",
    "ftStateData": "{\"Error\": \"ftCashBoxID invalid\"}"
}

Fiskaly number of transactions maxes out at 100

The current implementation of the FiskalyCertified SCU doesn't report the actual count of open transactions if there are more than 100. This is because the Fiskaly API uses pagination and we base the count on the returned active transactions.

Since having more than 100 open transactions is unlikely, we shouldn't have to download all transactions to the SCU, as that could delay the startup.

What we should fix, though, is the count that is being reported. The TSS endpoint offers the actual count of open transactions (https://developer.fiskaly.com/api/kassensichv/v2#operation/retrieveTss), so we can use that instead of the current implementation.
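The proposed fix boils down to reading the count from the TSS object instead of counting the (paginated, max. 100) active transactions. A sketch in Python; the field name below is an assumption and must be verified against the fiskaly API documentation:

```python
def get_open_transaction_count(tss_info: dict) -> int:
    """Report the open-transaction count straight from the retrieveTss
    response, instead of counting paginated active transactions.
    NOTE: 'number_of_open_transactions' is an assumed field name."""
    return int(tss_info["number_of_open_transactions"])
```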

Improve Exception message when Initiate SCU Switch fails

Is your feature request related to a problem? Please describe.

When an SCU switch is initiated and fails due to a wrong configuration in the source or target SCU, the error message is the same in both cases and gives no further hint.

Describe the solution you'd like

I would suggest adding information about the source of the exception to the message so troubleshooting is easier.

Add SwissbitCloud support on Android

Is your feature request related to a problem? Please describe.

Eventually, we should add support for the SwissbitCloud TSE on Android.

Describe the solution you'd like

This TSE differs a lot from the "desktop" version, and should IMO be treated as a completely new SCU. It does not contain any endpoints/client operations for administrative actions (like registering clients or starting/getting exports), and works via a .jar client Swissbit provides.

Additional context

If we don't want to change our IDESSCD interface, a possible solution to work around these limitations could be to offer a fiskaltrust-hosted cloud service that wraps these methods. Swissbit technically offers cloud APIs for this (via the ERS API), but authentication via TSE credentials is not supported there, hence we'd need to wrap this too.

DE Fiskaly: Split TAR export to avoid "E_TOO_MANY_RECORDS" exception

Issue

Fiskaly has an export limit of 1,000,000 signatures. Exceeding this limit causes the TAR export to fail with the exception E_TOO_MANY_RECORDS. We have customers who reach this limit.

Possible fix

Fiskaly shared the following information with us:

We do recommend that you split your request to avoid triggering too much data at once.

E.g. if tss.signature_counter is equal to 2500000, then a complete export of all TSS records should be done in 3 batches:

  1. First export with end_signature_counter=1000000
  2. Second export with start_signature_counter=1000001 and end_signature_counter=2000000
  3. Third export with start_signature_counter=2000001
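The batching rule above can be sketched as follows (Python for illustration):

```python
def export_batches(signature_counter: int, batch_size: int = 1_000_000):
    """Split a full export into (start, end) signature-counter ranges so that
    no single request exceeds the E_TOO_MANY_RECORDS limit."""
    batches = []
    start = 1
    while start <= signature_counter:
        end = min(start + batch_size - 1, signature_counter)
        batches.append((start, end))
        start = end + 1
    return batches
```

For signature_counter = 2,500,000 this yields exactly the three ranges from fiskaly's example.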

Further information

If you need to know which customer is affected on our side, please just send me a message on Slack to keep this private.

Tasks

  • Create a test TSS with many transactions
  • Before running a fiskaly export, check how many signatures will be exported (by comparing the current signature counter with the previous one from the metadata property)
    • This returns the transaction number difference, which is not the same as the signature difference we need here
    • To estimate the number of exported signatures from the number of transactions, you can use this simplified formula: number of signatures = number of transactions * 2.5
  • If the export would include more than 800k signatures, split the export into multiple requests and combine the resulting TAR files (like we do in the DeutscheFiskal SCU here)
  • Test the split & merge functionality by temporarily setting the split range to e.g. 100 (to avoid creating a super large TSE)

Update FCC to 4.0.8 in SwissbitCloud and DF SCUs

Background

DeutscheFiskal has released several new versions since our last update (we're on version 4.0.5, the latest is 4.0.8). Most of the changes (listed here) are not super relevant for us, but a breaking change was introduced in 4.0.8, and our MW is no longer compatible with that version.

Problem

A breaking change was introduced in FCC 4.0.8: the GET /info and POST /registration endpoints now require authentication. While we already have that in place for the latter, we use the /info endpoint without auth, leading to a failing GetTseInfo() method.

Solution

We need to add authentication to the info endpoint, and we should update our internally used FCC to v4.0.8.
Additionally, we need to verify that the info endpoint of older FCCs can also be used with authentication, in case someone wants to stay on an older version.
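A sketch of the required client change (Python for illustration; the /info path comes from the issue, while the admin credentials and base URL are placeholders):

```python
import base64
import urllib.request

def basic_auth_header(user: str, password: str) -> str:
    """Build the HTTP Basic Authorization header value."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

def get_fcc_info(base_url: str, admin_user: str, admin_password: str) -> bytes:
    # GET /info requires authentication since FCC 4.0.8; whether older FCC
    # versions also accept the header still needs to be verified.
    req = urllib.request.Request(
        f"{base_url}/info",
        headers={"Authorization": basic_auth_header(admin_user, admin_password)},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```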

Tasks

  • Add admin authentication to the client calling the info endpoint
  • Ensure that other endpoints (i.e. /registration) are correctly authenticated
  • Test if these changes still work with FCCs older than 4.0.8 (easiest by doing this before updating to 4.0.8)
  • Update the default used FCC version to 4.0.8
  • Upload the FCC packages to our download storage
  • Release the DeutscheFiskal and SwissbitCloud SCUs v.1.3.55 (or 56?)
  • Write release notes

Support creation of X-Reports for Epson RT Printer

Epson RT Printers support printing a so-called X-01 financial report. This report is used in various business processes and therefore must be available via the Middleware.

While the need for this report is clear, retrieving all the necessary data for it is usually a feature covered by the Middleware (e.g. via a specific Journal endpoint) or a Portal export. Even though we will recommend that integrators use one of these capabilities to get the financial data, or possibly an export generated by their POS, some processes still require the actual printed document.

To support getting the printed documents we will add an additional local ftReceiptCaseFlag (_4954_2III_0000_0000). This flag is only supported with a given set of SCUs and will not be part of the main interface.

This flag is specific to Italy and will have a different functionality in other markets.

The following defaults should be taken into consideration in documentation and implementation:

  • If the connected SCU does not support printing X-Reports, the flag will be ignored. This allows us to easily switch between SCUs and still gives integrators flexibility in their integration
  • The official recommendation is to use either the reports from your POS or the ones we provide in the Portal. Since we consider the RT printers to be only a signature device, printing the report via the printer is just a workaround. POS systems usually give a clearer picture and can handle more cases better than the printer, so this should give end users better visibility.
  • The flag will be supported in the zero receipt only (0x4954_2000_0000_2000). Since this is a very specific action that is called on demand, it is a very similar process to the one we use now in DE for some receipts.
  • The flag is only evaluated in the SCU. The Queue will just pass through the zero receipt and mustn't change its behavior

Since the local flags don't follow a specific logic and just need to be additive, we will start with 001:

0x4954_2001_0000_2000
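The flag evaluation in the SCU could look like this (Python for illustration; the bit masks are derived from the case numbers above, and masking the low word to detect the zero receipt is an assumption):

```python
XREPORT_FLAG = 0x1_0000_0000  # bit distinguishing 0x4954_2001_... from 0x4954_2000_...
ZERO_RECEIPT = 0x2000         # low word of the zero receipt case 0x4954_2000_0000_2000

def should_print_xreport(ft_receipt_case: int) -> bool:
    """The X-Report flag is only honored on zero receipts; SCUs without
    X-Report support simply ignore it."""
    is_zero_receipt = (ft_receipt_case & 0xFFFF) == ZERO_RECEIPT
    return is_zero_receipt and bool(ft_receipt_case & XREPORT_FLAG)
```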

ToDos

  • Extend Epson RT Printer SCU to support X Report Flag
  • Extend Custom RT Printer SCU to support X Report Flag
  • Extend Documentation with new local ReceiptCaseFlag

Restoring NuGet packages without access to the private feed is not working

Currently it is not possible to build/debug the Middleware without access to the private NuGet feed used in the pipeline. Most of the packages are already publicly available, but some (e.g. fiskaltrust.storage) are internal only. We should find a way to make these packages available so that contributors can build the Middleware.

Unify common middleware processes

Currently, the queue is only used for basic operations (e.g. creating queue items & chaining), and all other steps are forwarded to the market-specific processor. This can cause implementations of common processes (e.g. daily closing) to drift apart.

Also, we need to re-implement all these common processes for each new market.

By unifying these processes and creating a shared implementation in the basic queue, we will be able to enforce the same processes in all markets, and also allow users from markets that are not implemented yet to use all common processes for a new market by just sending the market code at the beginning of the receipt case (switching from DE 4445 to IT 4954).

The processes that should be shared are mainly those described in the general part of the interface documentation: https://docs.fiskaltrust.cloud/docs/poscreators/middleware-doc

Common receipts

While there might be market-specific actions required for each of these processes (e.g. DE requires TAR files to be exported on a daily closing), we should be able to find a common way to implement the following processes:

  • Zero Receipt
  • Start Receipt
  • Stop Receipt
  • End of Failure Receipt
  • Daily Closing
  • Monthly Closing
  • Yearly Closing
  • Handwritten receipt (?)
  • "Nacherfassung" (?)

Default receipt - Chaining / Security Mechanism

In addition to that, we could think about implementing a common receipt that can be used with a common sign processor without calling country-specific logic. This could be especially helpful for markets that don't really depend on specific external components (like a TSE) and could already benefit from the SecurityMechanism.

Common error handling

While we have some specific things to check for each market, there are some prerequisites that we expect when operating the middleware. The following is a list of these checks and can potentially be extended:

  • Allowing receipts other than the start receipt only when the queue is started
  • Checking the sum amounts of pay items & charge items
  • ?
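The pay/charge sum check could be sketched as follows (Python for illustration; the item shape mirrors the JSON samples in this document, and the tolerance is an assumption):

```python
from decimal import Decimal

def sums_match(charge_items, pay_items, tolerance: Decimal = Decimal("0.01")) -> bool:
    """Shared validation: the charge item total should equal the pay item total."""
    charge_sum = sum((Decimal(str(i["Amount"])) for i in charge_items), Decimal(0))
    pay_sum = sum((Decimal(str(i["Amount"])) for i in pay_items), Decimal(0))
    return abs(charge_sum - pay_sum) <= tolerance
```

Decimal arithmetic avoids the float rounding issues visible in the amount fields of the samples above.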

Swissbit TSE chunked export: save last exported transaction number, add iteration value

When using the chunked export, we want to save the last exported transaction number, export a chunk starting from that number, and then increase the saved value to the new last exported transaction number.

This will create a journal entry at each daily closing containing only the transactions in this range. This way we will continuously export all transactions without running into OOM exceptions.
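The loop described above can be sketched like this (Python for illustration; `export_chunk` stands in for the actual TSE export call, and the chunk size is an assumption):

```python
def run_chunked_export(export_chunk, last_exported: int, latest: int,
                       chunk_size: int = 1000) -> int:
    """Export transactions in ranges of chunk_size, advancing the saved
    last-exported transaction number after each successful chunk."""
    while last_exported < latest:
        end = min(last_exported + chunk_size, latest)
        export_chunk(last_exported + 1, end)
        last_exported = end  # persist this value, e.g. with the journal entry
    return last_exported
```

Because each chunk only covers a bounded transaction range, the export never has to hold the full TAR data in memory at once.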

Open questions

Can we delete the transaction ranges in the chunks from the TSE, or can we only delete everything once we have exported everything?

Monthly and yearly closing is only half-way processed when SCU is not reachable

Right now, when doing a monthly or yearly closing receipt in Germany and the SCU is not reachable during that time (or fails, e.g. due to timeouts), we return a "failed receipt", but don't process the monthly or yearly closing. This leads to follow-up issues, e.g.:

  1. The closings don't show up in the receipt journal in the Portal (minor problem)
  2. Automatic export ranges - like MeinFiskal and PosArchive - are calculated based on these action journals.

What we should do:

  • Create the ActionJournal (with the required fields; check Helipad and the Portal for those)

DE 2024 KassenSichV / TSE SerialNumber on receipt

The new AEAO / KassenSichV, valid from 2024-01-01, requires the TSE serial number to be printed on the receipt if the QR code isn't used. Currently the middleware doesn't return the TSE serial number in the ftSignatureTypes.

What we need:

  • return the TSE serial number in the ftSignatureTypes
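A sketch of the returned signature item (Python dict for illustration; the ftSignatureType value is a placeholder, not a confirmed type ID):

```python
SIGNATURE_FORMAT_TEXT = 0x01
TSE_SERIAL_SIGNATURE_TYPE = 0x4445000000000000  # placeholder DE signature type

def tse_serial_signature(serial_hex: str) -> dict:
    """Build an ftSignatureItem carrying the TSE serial number, so POS systems
    can print it on receipts without a QR code (required from 2024-01-01)."""
    return {
        "ftSignatureFormat": SIGNATURE_FORMAT_TEXT,
        "ftSignatureType": TSE_SERIAL_SIGNATURE_TYPE,
        "Caption": "TSE-Seriennummer",
        "Data": serial_hex,
    }
```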

Legal baseline:

AEAO zu §146
https://www.bundesfinanzministerium.de/Content/DE/Downloads/BMF_Schreiben/Weitere_Steuerthemen/Abgabenordnung/AO-Anwendungserlass/2023-06-30-AEAO-Par-146-AO.pdf?__blob=publicationFile&v=2

2.4.4 Der Beleg muss mindestens folgende Angaben enthalten:

  1. Den vollständigen Namen und die vollständige Anschrift des leistenden Unternehmers (vgl. § 6 Satz 1 Nr. 1 KassenSichV).
  2. Das Datum der Belegausstellung und den Zeitpunkt des Vorgangsbeginns sowie den Zeitpunkt der Vorgangsbeendigung (vgl. AEAO zu § 146a, Nr. 2.2.3.3)
  3. Die Menge und die Art der gelieferten Gegenstände oder den Umfang und die Art der sonstigen Leistung (vgl. auch AEAO zu § 146, Nr. 2.1.3).
  4. Die Transaktionsnummer i. S. d. § 2 Satz 2 Nr. 2 KassenSichV (vgl. AEAO zu § 146a, Nr. 2.2.2)
  5. Das Entgelt und den darauf entfallenden Steuerbetrag für die Lieferung oder sonstige Leistung in einer Summe sowie den anzuwendenden Steuersatz oder im Fall einer Steuerbefreiung einen Hinweis darauf, dass für die Lieferung oder sonstige Leistung eine Steuerbefreiung gilt.
    Erfordert ein Geschäftsvorfall (vgl. AEAO zu § 146a, Nr. 1.10) nicht die Erstellung einer Rechnung i. S. d. § 14 UStG, sondern einen sonstigen Beleg (z. B. Lieferschein), wird nicht beanstandet, wenn dieser Beleg nicht den unter § 6 Satz 1 Nr. 5 KassenSichV geforderten Steuerbetrag enthält.
  6. Die Seriennummer des elektronischen Aufzeichnungssystems sowie die Seriennummer des Sicherheitsmoduls.

    Sofern ein QR-Code gemäß Anhang I der DSFinV-K anstelle der für jedermann ohne maschinelle Unterstützung lesbaren Daten verwendet wird, gelten die vorgenannten Anforderungen als erfüllt.

The word "sowie" ("as well as") replaces the word "oder" ("or") from the previous version. These are the requirements for the receipt.

The reference to the DSFinV-K for the QR code now shows the following note:
Note: To be able to perform a receipt verification at present, the public key must be included in the QR code. The TSE serial number is omitted there for space reasons (for validation, it can be computed from the public key (SHA256 as an octet string)).

In summary:
From 2024-01-01, the TSE serial number must be printed on the receipt. In the QR code it can be omitted for space reasons.

Add more verbose log messages in the SignProcessorDE

It would be great to have more verbose-level log messages in the DE SignProcessor middleware and in the PosReceiptCommand.

Specifically, log more between these two log messages to help identify performance issues:

2023-10-29 14:12:47.122 +01:00 [VRB] SignProcessorDE.PerformReceiptRequest: Executing command POS receipt.
2023-10-29 14:12:47.751 +01:00 [VRB] SignProcessor.InternalSign: Country specific SignProcessor finished.

These log messages come from here:

In between these two there is basically only the ExecuteAsync method of the PosReceiptCommand, so we'll need to add log messages to it and to what it uses.

Swissbit SCU may throw out of memory exceptions when running very large exports

Describe the bug

The Swissbit SCU (for the hardware TSE) currently may throw OutOfMemoryExceptions when performing very large exports. This happens with several tens of thousands of transactions, but we've also observed it when exporting 7k transactions on very low-spec machines.

To Reproduce

This can also be reproduced "small-scale": we can create a few hundred (to thousands of) transactions on a TSE, run an export, and monitor the memory consumption (e.g. with the VS Profiler or dotMemory).

Exceptions (if any)

2023-08-06 19:42:41.482 +02:00 [ERR] Failed to execute CacheExportIncrementalAsync - TempFileName: 42308b33-9de7-4e2e-82b0-8dbf3ef06ade
System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
   at System.IO.MemoryStream.set_Capacity(Int32 value)
   at System.IO.MemoryStream.EnsureCapacity(Int32 value)
   at System.IO.MemoryStream.Write(Byte[] buffer, Int32 offset, Int32 count)
   at fiskaltrust.Middleware.SCU.DE.Swissbit.Interop.SwissbitProxy.<>c__DisplayClass35_0.<ExportTarFilteredTransactionAsync>b__1(IntPtr chunk, UInt32 chunkLength, IntPtr callbackData)
   at fiskaltrust.Middleware.SCU.DE.Swissbit.Interop.NativeWormAPI.worm_export_tar_filtered_transaction(IntPtr context, UInt64 transactionNumberStart, UInt64 transactionNumberEnd, IntPtr clientId, IntPtr callback, IntPtr callbackData)
   at fiskaltrust.Middleware.SCU.DE.Swissbit.Interop.SwissbitProxy.<>c__DisplayClass35_0.<ExportTarFilteredTransactionAsync>b__0()
   at fiskaltrust.Middleware.SCU.DE.Swissbit.Helpers.LockingHelper.<PerformWithLock>d__6.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at fiskaltrust.Middleware.SCU.DE.Swissbit.Interop.SwissbitProxy.<ExportTarFilteredTransactionAsync>d__35.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at fiskaltrust.Middleware.SCU.DE.Swissbit.SwissbitSCU.<>c__DisplayClass52_0.<<CacheExportIncrementalAsync>b__0>d.MoveNext()

Further technical details & context

In my opinion, this may be related to the way we handle IntPtrs in SwissbitProxy.ExportTarAsync(): the func_worm_export_tar method allocates memory in the chunk parameter, which is then not freed. The function pointer we create via Marshal.GetFunctionPointerForDelegate is also not disposed; I'm not 100% sure this is the issue, but it still wouldn't hurt to free it as well.

We should also check the remaining related methods for similar behavior, and ensure we free unmanaged memory wherever possible.

Local DSFinV-K export fails in 1.3.43 Queue

Describe the bug

After upgrading a Queue to the latest version (1.3.43), the DSFinV-K export doesn't work anymore.

To Reproduce

  • Start the Queue with version 1.3.43
  • Execute a Journal call to retrieve the DSFinV-K: {{base_url}}/json/v0/Journal?type=4919338167972134914
  • It returns an empty ZIP archive and the log shows a fatal error

Exceptions (if any)

2023-03-01 10:58:47 [FTL] An error occured while generating the DSFinV-K export.
System.MissingMethodException: Method not found: 'CsvHelper.TypeConversion.TypeConverterCache CsvHelper.Configuration.IWriterConfiguration.get_TypeConverterCache()'.
   at fiskaltrust.Exports.DSFinVK.Csv.CsvGenerator.<WriteAsync>d__0`1.MoveNext()
   at System.Runtime.CompilerServices.AsyncTaskMethodBuilder.Start[TStateMachine](TStateMachine& stateMachine)
   at fiskaltrust.Exports.DSFinVK.Csv.CsvGenerator.WriteAsync[T](String path, IEnumerable`1 records)
   at fiskaltrust.Exports.DSFinVK.ModuleFormatters.MasterDataModuleFormatter.<CreateStammAbschlussFileAsync>d__11.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at fiskaltrust.Exports.DSFinVK.ModuleFormatters.MasterDataModuleFormatter.<ProcessAsync>d__10.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at fiskaltrust.Exports.DSFinVK.DSFinVKDailyClosingFormatter.<Process>d__6.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at fiskaltrust.Exports.DSFinVK.DSFinVKExporter.<ExportAsync>d__6.MoveNext()

Further technical details & context

  • Version of the Middleware Launcher: 1.3.40-ci-22280-56930
  • Configuration, e.g. used packages and versions:
    • SwissbitCloud 1.3.43
    • Queue.SQLite 1.3.43
  • Operating system: Windows 11

FR - Growing latencies on copy receipts

Summary

Although the recent change (#192) improved the performance of signature requests for copy receipts, we're observing an increasing trend in copy receipt latencies, which has started to cause performance issues in our integrations.

We have outlets with >200K queue items and a CNumerator of nearly 5K. We noticed that the latencies correlate directly with the number of queue items.

[Chart omitted: copy receipt latency (red, left y-axis) vs. number of queue items (yellow, right y-axis) for multiple outlets (x-axis).]

Impact

The impact can be seen in the sub-sections for each store. Each sub-section shows the queue size, CNumerator, and a latency heatmap for:

  • All receipts - signature requests for all receipts
  • Copy receipts - signature requests for copy receipts
  • Other receipts - signature requests for non-copy receipts

TL;DR:

  • Copy receipt latency has a direct correlation with the queue size.
  • While a copy receipt request is being processed, it blocks the queue and impacts the latency of other receipt requests.
    • This is expected due to the blocking nature of the queue; the main idea is to highlight that the spikes in other receipt types are actually a result of copy receipt requests.
  • Growing copy receipt latencies can be clustered into two groups:
    • A first group with higher latency
    • A second group with lower latency, consisting mainly of subsequent copy receipt requests. Subsequent copy receipt requests made within a certain window are always faster; this may be the result of an index cache on the data layer.

Store A

  • Queue size: 216K
  • CNumerator: 4.7K

Latency heatmap for all receipts:

image

Latency heatmap for copy receipts:

image

Latency heatmap for other receipts:

image

When we focus on the copy receipt latencies, it can be seen that subsequent copy receipt requests made within a certain window are always faster. This can be the result of an index cache on the data layer. The table below shows copy receipt requests grouped into equal time windows:

image

Findings:

  • The latency heatmap for copy receipts indicates that the latencies cluster into two levels: 4-5s and 10-15s.
  • The latency heatmap for other receipts indicates that the latencies are mostly on the same level (~500ms).
    • Latencies of other receipts are similar (~500ms) for all receipt types. This can also be seen in the table in the summary section.
    • In some cases, an increasing latency can be observed for other receipts. This is a result of a previous copy receipt request blocking the Middleware while it is being processed.

Store B

  • Queue size: 194K
  • CNumerator: 3.7K

Latency heatmap for all receipts:

image

Latency heatmap for copy receipts:

image

Latency heatmap for other receipts:

image

Findings:

  • Same as store A.

Store I

  • Queue size: 42K
  • CNumerator: 0.9K

Latency heatmap for all receipts:

image

Latency heatmap for copy receipts:

image

Latency heatmap for other receipts:

image

Findings:

  • Copy receipt request latencies are on the same level (~2.4s-3.1s).
  • Other receipt request latencies are on the same level (~500ms), with spikes up to ~2.8s that are a side effect of a copy receipt request being processed in the background.

Expectations

As SaaS and PaaS providers, our expectation is to have stable latencies regardless of the queue size. Growing queue size will cause delays or even interruptions in receipt printing, and as a result, degraded customer experience.

CryptoVision v2 Support

Since CryptoVision v2 has been released (https://www.cryptovision.com/de/cryptovision-tsev2-erhaelt-bsi-zertifikat/), we should determine which changes are needed to make the Middleware work with the new firmware.

This issue should summarize the necessary steps and design decisions that we are making for adding support. As a first step we need to analyze the new interface description and compare it to our current implementation to see which changes are required.

Besides that, we will need to check whether there are any conceptual changes in v2 and whether we are still able to use File I/O the way we currently do.

Todos

  • Analyze new documentation and compare with current implementation
  • Check if the current implementation will be extended or if we should create a new package
  • Prepare documentation on how to switch v1 to v2
  • The new legislation requires us to collect information on the connection to the so-called Sperrliste; we should check how we can get this information

Create new market template / Default market

We want to have a template project which can be used as a baseline for creating a new market.

This template market should also serve as a default implementation if possible.

Default Queue

  • Copy Italian market fiskaltrust.Middleware.Localization.QueueIT to fiskaltrust.Middleware.Localization.QueueDEFAULT
  • Add the project to the solution and the fiskaltrust.Middleware.Queue project
  • Rename all references of IT to Default in filenames and code
  • Remove all logic which is specific to the Italian market
  • Remove all calls to the IITSSCD
  • Add the default queue to the LocalizedQueueBootStrapperFactory
  • Restrict to sandbox

New Market Template

  • Add documentation to README on how to use the default queue as a new market template
  • Add comments to default queue code explaining what it does and what to change
  • Add dummy code which can be expanded on in a new market (I'm not sure yet what that would be)
  • Document what to add to the fiskaltrust.storage and the fiskaltrust.interface

Daily closing is only half-way processed when SCU is not reachable

Describe the bug

Right now, when doing a daily closing receipt in Germany and the SCU is not reachable during that time (or fails, e.g. due to timeouts), we return a "failed receipt", but don't actually process the daily closing. This leads to some follow-up issues, e.g.:

  1. The daily closings don't show up in the receipt journal in the Portal (minor problem)
  2. Automatic export ranges - like MeinFiskal and PosArchiv - are calculated based on these action journals. So when this issue happens, the daily closings will still be uploaded and trigger the export in Helipad, but the export range will be wrong (major problem)

To Reproduce

  1. Start a MW, ensure everything is working
  2. Unplug the TSE
  3. Send a daily closing receipt

Further technical details & context

Daily closing receipt: https://github.com/fiskaltrust/middleware/blob/main/queue/src/fiskaltrust.Middleware.Localization.QueueDE/RequestCommands/DailyClosingReceiptCommand.cs

Calculation in Helipad (based on actionjournal):
https://dev.azure.com/fiskaltrust/fiskaltrust/_git/fiskaltrust.space.helipad?path=/src/fiskaltrust.space.helipad.Domain/Services/TableStorage/TableStorageProvider.cs&version=GBmain&line=104&lineEnd=105&lineStartColumn=1&lineEndColumn=1&lineStyle=plain&_a=contents

Update FCC to 4.1.1 and release SwissbitCloud and DF SCUs

Problem

The DeutscheFiskal has released an update to their FCC that must be installed by the end of July because of a certificate update.

Solution

We need to update our reference to the FCC to v4.1.1, and release the two affected SCUs as version 1.3.56.

Tasks

  • Update FCC version to 4.1.1 in SwissbitCloud and DeutscheFiskal SCUs
  • Upload the zip files to the download storage
  • Test on sandbox
    • New installation
    • Update from previous version
  • Release
    • DeutscheFiskal SCU v1.3.56
    • SwissbitCloud SCU v1.3.56
    • Write release notes
  • Inform @saschaknapp and @marcoleidl about the release, so that we can inform our customers that this update is mandatory

IT Middleware doesn't allow cbUser field to be a string

While we allow user-defined string values for the cbUser field in other markets, in Italy we restrict it to a number from 1-12. This is somewhat of a breaking change for most existing POSCreators, since they very often include the real user's name.

To fix this, we will have to do two things:

  • Remove the restriction from the current implementation and allow user-defined values
  • Figure out an internal mapping to the Operator that is used on the RT Printer / RT Server

German Middleware behaves differently when switching to late-signing mode

Describe the bug

When using the late signing mode (via the ftReceiptCaseFlag 0x0000000000010000), the Middleware behaves slightly differently in Germany than it does in France or Austria.

  • In Austria and France, the Middleware always returns the ftState 0x...8 while it's operating in late signing mode, regardless of whether the following receipts (sent before the zero receipt) carry the ftReceiptCaseFlag 0x0000000000010000
  • In Germany, this ftState is only returned when the mentioned flag is included into the receipts

We should IMO change the German behavior to match the Austrian and French one, because the Middleware in fact stays in late signing mode.

To Reproduce

  1. Send a POS receipt to the Middleware where the ftReceiptCase has the flag 0x0000000000010000
  2. The response will have the expected ftState 0x...8
  3. Send another POS receipt without the flag 0x0000000000010000
    1. In Austria and France, the response will have the ftState 0x...8
    2. In Germany, it will have the state 0x...00

I'm a bit worried though because - depending on the implementation - this could be a breaking change, although IMO that's very unlikely. It may also make sense to fix this with the tagging v2 🤔

Introduce QueueEU

Is your feature request related to a problem? Please describe.

For countries within the European Union using the EUR currency where no localized Middleware is provided, I would like to use the basic fiskaltrust security mechanism.

Describe the solution you'd like

A Queue implementation which processes hash-chaining, creates a ReceiptJournal, and does not require an SCU.

Additional context

Rename QueueDEFAULT to QueueEU and distribute it in all available countries. For future implementations, use the basic fiskaltrust security mechanism from QueueEU.

Migrate Austrian and French Middleware to new platform

To unify the Middleware experience and support the latest platform features (Android, BYODC, Launcher 2.0, ...), we need to migrate the French and Austrian Queues and SCUs to the new platform hosted in this repository.

This is currently a WIP - we use this issue to track the state:

Austria

  • Queue / Signing (incl. receipt handling)
    • Basic receipt handling
    • Signing decisions
    • Error handling/special cases
  • Queue / Journals
  • SCUs
  • Tests

France

  • Queue / Signing (incl. receipt handling)
  • Queue / Journals
  • Tests

Receipt queue database not refreshed

We have issues with the queues of both active locations: FR0147 and FR4667.
For FR4667, the latest data is from 07/09/2023.
For FR0147, the latest data is from 28/09/2023, but the reason for the shorter delay is that I requested a manual push of the data.
FR4667
FR0147

Add test for sqlite nuspec

Background

When building the NuGet package for SQLite, the .nuspec file needs to be configured correctly.

When a new package is added and the .nuspec file is not updated, the released package cannot be used as a PackageReference, which is needed for the Android Launcher and SignatureCloud.DE.

Solution

Create a smoke test running in CI which packs and publishes the SQLite queue to a local directory NuGet repository and then tries to consume the package from that repository.

Tasks

  • Reproduce locally in the Android launcher repository (using SQLite version 1.3.53)
  • Reproduce in the Android launcher with a local NuGet repository to which we publish an intentionally broken package (break the package e.g. by removing QueueDEFAULT from queue/src/fiskaltrust.Middleware.Queue.SQLite/.nuspec)
  • Reproduce in a new test project in the middleware repository
  • Add a test to the fiskaltrust.Middleware.Queue pipeline that fails if the package is broken
    • Add a new project that resembles the Android launcher project
    • Create a local feed on the pipeline that contains the Queue.SQLite package
    • Add pipeline tasks to build the new project
    • Force the project to use the Queue.SQLite package from the local feed

Add Azure Table Storage based queue

To better support cloud hosted scenarios we should add an additional implementation for an Azure Table Storage based queue.

Naming

  • Queue: fiskaltrust.Middleware.Queue.AzureTableStorage

Configuration

{
   "StorageAccountName": "example"
}

Azure Table Storage handling

While Azure Table Storage scales very well and is also a low-cost storage option, it has some limitations that we will have to take into consideration when basing our Queue on it. While there may be multiple Table Storage accounts in use, each Queue will be connected to a single Table Storage instance. Similar to the SQL Server based implementation, each Queue will have dedicated tables (e.g. x03d61fe261d44beeac2a78c95533ecf7QueueItem).
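The per-queue table name could be derived like in this sketch (the helper class is hypothetical; the convention matches the example above):

```csharp
using System;

public static class TableNaming
{
    // Azure table names must be alphanumeric and start with a letter,
    // hence the "x" prefix and the dash-free ("N") GUID format.
    public static string GetTableName(Guid queueId, string entityName)
        => $"x{queueId:N}{entityName}";
}

// e.g. GetTableName(Guid.Parse("03d61fe2-61d4-4bee-ac2a-78c95533ecf7"), "QueueItem")
//      -> "x03d61fe261d44beeac2a78c95533ecf7QueueItem"
```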

Partitioning

Since the only form of indexing that Azure Table Storage supports is partitioning (PartitionKey and RowKey), we will have to be careful to select the right design for the tables. To choose the right pair of keys, we have taken a look at the access patterns of the current Middleware implementation. Since most access is write-based and Azure Table Storage performs very well on writes, we don't have to make any specific decisions for that.

When reading entities, we can differentiate between three groups:

  • Configuration entities (queue, cashbox, ...)
  • Data entities (queue items, receipt journals, ...)
  • Supporting entities (failed transactions, open transactions, ...)

Configuration entities

These entities are read often but written very infrequently. In most cases there is also only one or a very small number of entries, which are mainly accessed by their Id. For this reason we can use the main id of the specific entity as both partition and row key:

  • CashBox
    • PartitionKey: ftCashBoxId
    • RowKey: ftCashBoxId
  • QueueAT
    • PartitionKey: ftQueueATId
    • RowKey: ftQueueATId
  • QueueDE
    • PartitionKey: ftQueueDEId
    • RowKey: ftQueueDEId
  • QueueFR
    • PartitionKey: ftQueueFRId
    • RowKey: ftQueueFRId
  • QueueME
    • PartitionKey: ftQueueMEId
    • RowKey: ftQueueMEId
  • Queue
    • PartitionKey: ftQueueId
    • RowKey: ftQueueId
  • SignaturCreationUnitAT
    • PartitionKey: ftSignaturCreationUnitATId
    • RowKey: ftSignaturCreationUnitATId
  • SignaturCreationUnitDE
    • PartitionKey: ftSignaturCreationUnitDEId
    • RowKey: ftSignaturCreationUnitDEId
  • SignaturCreationUnitFR
    • PartitionKey: ftSignaturCreationUnitFRId
    • RowKey: ftSignaturCreationUnitFRId
  • SignaturCreationUnitME
    • PartitionKey: ftSignaturCreationUnitMEId
    • RowKey: ftSignaturCreationUnitMEId

Master-Data entities

This category of entities is only used during specific operations and is also written infrequently. The number of entities is very small and can easily be cached. Therefore the scaling targets for this category are also based on point reads.

  • AccountMasterData
    • PartitionKey: AccountId
    • RowKey: AccountId
  • OutletMasterData
    • PartitionKey: OutletId
    • RowKey: OutletId
  • AgencyMasterData
    • PartitionKey: AgencyId
    • RowKey: AgencyId
  • PosSystemMasterData
    • PartitionKey: PosSystemId
    • RowKey: PosSystemId
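The point-read pattern shared by the configuration and master-data entities (PartitionKey == RowKey == entity id) could look like this sketch; the endpoint URI, table name, and class names are illustrative assumptions:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Data.Tables;
using Azure.Identity;

public class ConfigEntityReader
{
    private readonly TableClient _client;

    public ConfigEntityReader(Uri endpoint, string tableName)
        => _client = new TableClient(endpoint, tableName, new DefaultAzureCredential());

    // PartitionKey == RowKey == ftQueueId, so a lookup is a single cheap point read.
    public async Task<TableEntity> GetQueueAsync(Guid ftQueueId)
    {
        var response = await _client.GetEntityIfExistsAsync<TableEntity>(
            ftQueueId.ToString(), ftQueueId.ToString());
        return response.HasValue ? response.Value : null;
    }
}
```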

Data entities

For the data entities we expect high write rates, but we will also offer read capabilities via the Journal endpoints, which requires us to store them in partitions that allow larger reads.

  • ftQueueItem
    • PartitionKey: Timestamp
    • RowKey: ftQueueItemId
  • ftReceiptJournal
    • PartitionKey: Timestamp
    • RowKey: ftReceiptJournalId
  • ftActionJournal
    • PartitionKey: Timestamp
    • RowKey: ftActionJournalId
  • ftJournalAT
    • PartitionKey: Timestamp
    • RowKey: ftJournalATId
  • ftJournalDE
    • PartitionKey: Timestamp
    • RowKey: ftJournalDEId
  • ftJournalME
    • PartitionKey: Timestamp
    • RowKey: ftJournalMEId
  • ftJournalFR
    • PartitionKey: Timestamp
    • RowKey: ftJournalFRId
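A sketch of how such a timestamp-partitioned data entity could be written and range-queried with Azure.Data.Tables; the zero-padded tick key format and property names are assumptions, not the final design:

```csharp
using System;
using System.Threading.Tasks;
using Azure;
using Azure.Data.Tables;

public static class QueueItemTable
{
    // Zero-padded ticks sort lexicographically in chronological order.
    public static string TimestampKey(long ticks) => ticks.ToString("D19");

    public static Task AddAsync(TableClient client, long timestamp, Guid ftQueueItemId, string request)
        => client.AddEntityAsync(new TableEntity(TimestampKey(timestamp), ftQueueItemId.ToString())
        {
            ["request"] = request
        });

    // Range read for the Journal endpoints: scan all partitions between two timestamps.
    public static AsyncPageable<TableEntity> QueryRange(TableClient client, long fromTicks, long toTicks)
        => client.QueryAsync<TableEntity>(TableClient.CreateQueryFilter(
            $"PartitionKey ge {TimestampKey(fromTicks)} and PartitionKey le {TimestampKey(toTicks)}"));
}
```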

Supporting entities

These entities are written more frequently than config entities, but they are cleaned up once they are no longer needed. Therefore the scaling targets for these entities are again about point reads.

  • OpenTransaction
    • PartitionKey: cbReceiptReference
    • RowKey: cbReceiptReference
  • FailedFinishTransaction
    • PartitionKey: cbReceiptReference
    • RowKey: cbReceiptReference
  • FailedStartedTransaction
    • PartitionKey: cbReceiptReference
    • RowKey: cbReceiptReference

For some scenarios we allow reading by ftQueueRow. Since we are not yet clear about how this will look, we will implement this initial draft without additional lookup tables; for future scenarios we would add lookup tables to allow reading by those properties.

e.g.

  • ftQueueRowTimeStampQueueItem
    • PartitionKey: ftQueueRow
    • RowKey: Timestamp

Searching by QueueRow, we would be able to get the Timestamp and perform another call to the original table to look up entities efficiently.
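The two-step lookup could be sketched like this (table layout and names are hypothetical):

```csharp
using System.Threading.Tasks;
using Azure.Data.Tables;

public static class QueueRowLookup
{
    public static async Task<TableEntity> GetByQueueRowAsync(
        TableClient lookupTable, TableClient queueItemTable, long ftQueueRow)
    {
        var partitionKey = ftQueueRow.ToString();

        // Step 1: the lookup table is partitioned by ftQueueRow; its RowKey is the Timestamp.
        await foreach (var hit in lookupTable.QueryAsync<TableEntity>(
            e => e.PartitionKey == partitionKey, maxPerPage: 1))
        {
            // Step 2: use the Timestamp as partition key of the data table for a cheap read.
            await foreach (var item in queueItemTable.QueryAsync<TableEntity>(
                e => e.PartitionKey == hit.RowKey))
            {
                return item;
            }
        }
        return null;
    }
}
```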

Limitations

As described in the scale targets for Table Storage, there are some limitations we will have to take care of when working with Middleware entities in Azure Table Storage.

Size of entities

Currently a single entity (row) can only store up to 1 MiB of data and only 64 KiB per string property. This won't be an issue for most of our entities since they have a (nearly) fixed size. The only entities that can theoretically grow very large are ftQueueItem and ftJournalDE. While ftQueueItem is very unlikely to hit the 1 MiB entity limit, it may well hit the 64 KiB per-property limit. The 1 MiB limit will probably be hit by ftJournalDE entries. For this reason we will have to take care of both these cases:

Split columns

For string properties that grow larger than 64 KiB, we will have to split up the column. This will only be necessary for ftQueueItem.request and ftQueueItem.response. Both of these properties should be written to and read from multiple columns, splitting strings that exceed the limit across several properties (up to the limit of 255 properties per entity).
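A minimal sketch of such split-column handling; the column naming scheme (`request_0`, `request_1`, ...) and the chunk size constant are assumptions:

```csharp
using System;
using System.Text;
using Azure.Data.Tables;

public static class SplitColumns
{
    // 64 KiB per string property = 32K UTF-16 characters.
    private const int MaxChars = 32 * 1024;

    public static void WriteSplit(TableEntity entity, string property, string value)
    {
        // Spread the value over numbered columns: property_0, property_1, ...
        for (int i = 0, chunk = 0; i < value.Length; i += MaxChars, chunk++)
        {
            entity[$"{property}_{chunk}"] = value.Substring(i, Math.Min(MaxChars, value.Length - i));
        }
    }

    public static string ReadSplit(TableEntity entity, string property)
    {
        // Reassemble by reading consecutive numbered columns until one is missing.
        var sb = new StringBuilder();
        for (var chunk = 0; entity.TryGetValue($"{property}_{chunk}", out var part); chunk++)
        {
            sb.Append((string)part);
        }
        return sb.ToString();
    }
}
```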

Store blob into blobstorage

This option requires two calls per entity, since we store the larger blobs in Blob Storage while keeping the other properties in Table Storage. It has the benefit of keeping the entity in Table Storage simple, but the drawback of more complex access, since we have to perform an additional call per entity. This could be problematic in cases with lots of entities, like ftQueueItem or ftReceiptJournal: we can read 1000 entities in one call from Table Storage, but only one entity per call from Blob Storage. Because of this limitation, we'll only use this option for saving the JournalDE TAR files. Since there is only one entry per daily closing, the limitation mentioned above won't cause issues, because we will very rarely have to fetch more than a few entries.
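A sketch of the blob offloading approach for ftJournalDE; the container, blob path, and property names are illustrative:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using Azure.Data.Tables;
using Azure.Storage.Blobs;

public static class JournalDeStore
{
    public static async Task AddJournalDeAsync(
        TableClient table, BlobContainerClient container,
        Guid ftJournalDEId, string timestampKey, Stream tarFile)
    {
        // 1) Upload the large TAR payload to Blob Storage.
        var blob = container.GetBlobClient($"journalde/{ftJournalDEId:N}.tar");
        await blob.UploadAsync(tarFile, overwrite: true);

        // 2) Store the entity with a pointer to the blob instead of the payload.
        await table.AddEntityAsync(new TableEntity(timestampKey, ftJournalDEId.ToString())
        {
            ["FileContentBlobUri"] = blob.Uri.ToString()
        });
    }
}
```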

Security

Azure Table Storage offers several authentication options. While we have often relied on connection strings in the past, this can have security implications for hosted scenarios. For this reason we will (for the time being) only offer credential-based authentication. This allows us to use configuration parameters, environment variables, or an OpenID Connect based authentication mechanism that lets us get rid of secrets.

By leveraging the Azure.Identity SDK we can easily connect to Table Storage by passing a DefaultAzureCredential that will do the heavy lifting for us.

var client = new TableClient(new Uri("https://example.table.core.windows.net/"), new DefaultAzureCredential());

While we already have an implementation for Azure Table Storage, we will still have to make some adaptations to get it production-ready.

  • Upgrade the package Microsoft.Azure.Cosmos.Table to the latest Azure.Data.Tables
  • Choose the right place for initializing the data tables
  • Leverage identity-based authentication (https://learn.microsoft.com/en-us/dotnet/api/overview/azure/identity-readme?view=azure-dotnet) to get rid of connection strings
  • Define configuration parameters
  • Change the QueueItemRepository to use Timestamp as PartitionKey
  • Store ftJournalDE blobs in Blob Storage
  • Take care of splitting the request / response properties

DE: cbReceiptReference exception thrown for wrong receiptCase

Description:
When a single open transaction in the TSE should be closed, the fail-transaction receipt can be used. In such cases the cbReceiptReference has to be set, otherwise the following exception is thrown:
CbReceiptReference must be set for one transaction! If you want to close multiple transactions, pass an array value for 'CurrentStartedTransactionNumbers' via ftReceiptCaseData

Issue:
As this exception is only relevant for the fail-transaction receipt, it should only be thrown in that case. Currently, however, the middleware throws this exception even when no fail-transaction receipt is used. Example receipt request:

<ftReceiptCase type="number">4919338172267167745</ftReceiptCase>
  <ftReceiptCaseData type="string">KAZ060001185#03.05.22#07:41:05</ftReceiptCaseData>
  <ftCashboxID type="string">XXXXX</ftCashboxID>
  <cbTerminalID type="string">XXXXX</cbTerminalID>
  <cbUser type="string">{"ID":"XXXXXX","Name":""}</cbUser>
  <cbCustomer type="string">{"CustomerId":"","CustomerName":"","CustomerName2":"","CustomerStreet":"","CustomerStreet2":"","CustomerZip":"","CustomerCity":"","CustomerCountry":"","CustomerVATId":""}</cbCustomer>
  <cbReceiptReference type="string"></cbReceiptReference>
  <cbPreviousReceiptReference type="string">KAZ060001185</cbPreviousReceiptReference>
  <cbReceiptMoment type="string">2022-05-03T05:41:05Z</cbReceiptMoment>
  <cbChargeItems type="array">
    <item type="object">
      <ProductNumber type="string"></ProductNumber>
      <Description type="string">Vorgangsbeginn</Description>
      <a:item item="Description 2" type="string" xmlns:a="item"></a:item>
      <a:item item="Description 3" type="string" xmlns:a="item"></a:item>
      <Quantity type="number">0</Quantity>
      <UnitPrice type="number">0</UnitPrice>
      <Amount type="number">0</Amount>
      <VATRate type="number">0</VATRate>
      <VATAmount type="number">0</VATAmount>
      <ftChargeItemCase type="string">4919338167972134912</ftChargeItemCase>
      <ftChargeItemCaseData type="string">{"Reason Code":"Cashbox Verkauf Prozeß"}</ftChargeItemCaseData>
      <Moment type="string">2022-05-03T05:41:05Z</Moment>
    </item>
    <item type="object">
      <ProductNumber type="string"></ProductNumber>
      <Description type="string">Buche Wechselgeld aus</Description>
      <a:item item="Description 2" type="string" xmlns:a="item"></a:item>
      <a:item item="Description 3" type="string" xmlns:a="item"></a:item>
      <Quantity type="number">-1</Quantity>
      <UnitPrice type="number">-1054.32</UnitPrice>
      <Amount type="number">-1054.32</Amount>
      <VATRate type="number">0</VATRate>
      <VATAmount type="number">0</VATAmount>
      <ftChargeItemCase type="string">4919338167972135059</ftChargeItemCase>
      <ftChargeItemCaseData type="string">{Schließe Cashbox}</ftChargeItemCaseData>
      <Moment type="string">2022-05-03T05:41:05Z</Moment>
    </item>
  </cbChargeItems>
  <cbPayItems type="array">
    <item type="object">
      <Quantity type="number">-1</Quantity>
      <Amount type="number">-1054.32</Amount>
      <Description type="string">Buche Wechselgeld aus</Description>
      <ftPayItemCase type="string">4919338167972135000</ftPayItemCase>
      <ftPayItemCaseData type="string">{Schließe Cashbox}</ftPayItemCaseData>
      <Moment type="string">2022-05-03T05:41:05Z</Moment>
    </item>
  </cbPayItems>  

This is a regular receipt request with a "failed receipt" flag and therefore shouldn't result in this exception. Possible cause: the condition that checks whether the receipt is a fail-transaction receipt is negated, which results in the check being applied to all receipts except the relevant fail-transaction receipts:

if (string.IsNullOrEmpty(request.cbReceiptReference) && !request.IsFailTransactionReceipt() && !string.IsNullOrEmpty(request.ftReceiptCaseData) && !request.ftReceiptCaseData.Contains("CurrentStartedTransactionNumbers"))
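If the negation is indeed the cause, a possible fix (sketch; it assumes the surrounding validation code stays as quoted above) would be to drop the `!` on `IsFailTransactionReceipt` so the check only applies to fail-transaction receipts, keeping a null guard on ftReceiptCaseData:

```csharp
// Possible fix (sketch): only enforce cbReceiptReference for actual
// fail-transaction receipts, i.e. without the negation.
if (string.IsNullOrEmpty(request.cbReceiptReference)
    && request.IsFailTransactionReceipt()
    && (string.IsNullOrEmpty(request.ftReceiptCaseData)
        || !request.ftReceiptCaseData.Contains("CurrentStartedTransactionNumbers")))
{
    throw new ArgumentException(
        "CbReceiptReference must be set for one transaction! If you want to close multiple " +
        "transactions, pass an array value for 'CurrentStartedTransactionNumbers' via ftReceiptCaseData");
}
```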

DE HelipadHelper 1.3.47 not uploading JournalDE

The current version of the HelipadHelper 1.3.47 doesn't seem to upload the JournalDE table after triggering a daily-closing.

Upload behavior with 1.3.47 after daily-closing

Middleware log:

WCF/REST Request | Method name: Journal | Content-Type: application/json; charset=utf-8 | Request url: http://localhost:1500/json/v0/journal?type=255&from=0&to=0
2023-08-24 08:30:26.661 +02:00 [DBG] WCF/REST Response | Method name: Journal | Request url: http://localhost:1500/json/v0/journal?type=255&from=0&to=0
2023-08-24 08:30:27.638 +02:00 [DBG] WCF/REST Request | Method name: Journal | Content-Type: application/json; charset=utf-8 | Request url: http://localhost:1500/json/v0/journal?type=1&from=638283043819029359&to=-1000
2023-08-24 08:30:27.703 +02:00 [DBG] WCF/REST Response | Method name: Journal | Request url: http://localhost:1500/json/v0/journal?type=1&from=638283043819029359&to=-1000
2023-08-24 08:30:27.951 +02:00 [DBG] WCF/REST Request | Method name: Journal | Content-Type: application/json; charset=utf-8 | Request url: http://localhost:1500/json/v0/journal?type=2&from=638283043818689227&to=-1000
2023-08-24 08:30:27.991 +02:00 [DBG] WCF/REST Response | Method name: Journal | Request url: http://localhost:1500/json/v0/journal?type=2&from=638283043818689227&to=-1000
2023-08-24 08:30:28.310 +02:00 [DBG] WCF/REST Request | Method name: Journal | Content-Type: application/json; charset=utf-8 | Request url: http://localhost:1500/json/v0/journal?type=3&from=638283043818363985&to=-1000
2023-08-24 08:30:28.342 +02:00 [DBG] WCF/REST Response | Method name: Journal | Request url: http://localhost:1500/json/v0/journal?type=3&from=638283043818363985&to=-1000
2023-08-24 08:30:28.878 +02:00 [DBG] WCF/REST Request | Method name: Journal | Content-Type: application/json; charset=utf-8 | Request url: http://localhost:1500/json/v0/journal?type=341&from=638283043812699281&to=-1

No request for JournalType 17477 is made, and the JournalDE pointer in the portal hasn't been updated.

Upload behavior with 1.3.41 after daily-closing

Middleware log:

2023-08-24 08:51:32.369 +02:00 [DBG] WCF/REST Request | Method name: Journal | Content-Type: application/json; charset=utf-8 | Request url: http://localhost:1500/json/v0/journal?type=255&from=0&to=0
2023-08-24 08:51:32.474 +02:00 [DBG] WCF/REST Response | Method name: Journal | Request url: http://localhost:1500/json/v0/journal?type=255&from=0&to=0
2023-08-24 08:51:33.279 +02:00 [DBG] WCF/REST Request | Method name: Journal | Content-Type: application/json; charset=utf-8 | Request url: http://localhost:1500/json/v0/journal?type=1&from=638284558253682738&to=-1000
2023-08-24 08:51:33.339 +02:00 [DBG] WCF/REST Response | Method name: Journal | Request url: http://localhost:1500/json/v0/journal?type=1&from=638284558253682738&to=-1000
2023-08-24 08:51:33.584 +02:00 [DBG] WCF/REST Request | Method name: Journal | Content-Type: application/json; charset=utf-8 | Request url: http://localhost:1500/json/v0/journal?type=2&from=638284558253409678&to=-1000
2023-08-24 08:51:33.601 +02:00 [DBG] WCF/REST Response | Method name: Journal | Request url: http://localhost:1500/json/v0/journal?type=2&from=638284558253409678&to=-1000
2023-08-24 08:51:33.826 +02:00 [DBG] WCF/REST Request | Method name: Journal | Content-Type: application/json; charset=utf-8 | Request url: http://localhost:1500/json/v0/journal?type=3&from=638284558253165113&to=-1000
2023-08-24 08:51:33.839 +02:00 [DBG] WCF/REST Response | Method name: Journal | Request url: http://localhost:1500/json/v0/journal?type=3&from=638284558253165113&to=-1000
2023-08-24 08:51:34.260 +02:00 [DBG] WCF/REST Request | Method name: Journal | Content-Type: application/json; charset=utf-8 | Request url: http://localhost:1500/json/v0/journal?type=16724&from=1&to=-1000
2023-08-24 08:51:34.269 +02:00 [DBG] WCF/REST Response | Method name: Journal | Request url: http://localhost:1500/json/v0/journal?type=16724&from=1&to=-1000
2023-08-24 08:51:34.420 +02:00 [DBG] WCF/REST Request | Method name: Journal | Content-Type: application/json; charset=utf-8 | Request url: http://localhost:1500/json/v0/journal?type=17477&from=638283043812699281&to=-1
2023-08-24 08:51:34.431 +02:00 [DBG] WCF/REST Response | Method name: Journal | Request url: http://localhost:1500/json/v0/journal?type=17477&from=638283043812699281&to=-1
2023-08-24 08:51:34.606 +02:00 [DBG] WCF/REST Request | Method name: Journal | Content-Type: application/json; charset=utf-8 | Request url: http://localhost:1500/json/v0/journal?type=18002&from=1&to=-1000

JournalType 17477 is included and the JournalDE pointer in the portal has been updated.

In both cases the TAR-export beforehand was successful.

Log is attached :)
middleware-log-skn.log

AT - Rounding differences between Turnover counter and QR Code

Currently we round turnover counter and QR code amounts differently. While we round the summed-up values for the turnover counter, we round each individual value for the QR code:

sb2.AppendFormat(System.Globalization.CultureInfo.CreateSpecificCulture("de-AT"), "{0:0.00}_{1:0.00}_{2:0.00}_{3:0.00}_{4:0.00}_", betrag_Normal, betrag_Erm1, betrag_Erm2, betrag_Null, betrag_Besonders);

While this usually doesn't make a huge difference, there are some cases that lead to diverging turnover counters:

Betrag_normal: 41,305 
Betrag_Null: 0,205 

Turnover = 41,305 + 0,205 = 41,51 => Round => 41,51

QRCode = 41.31_0.00_0.00_0.21_0.00_ => Total Sum 41,52
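The divergence can be reproduced with a few lines (sketch; variable names follow the example above):

```csharp
using System;

public static class RoundingDemo
{
    public static void Main()
    {
        decimal betragNormal = 41.305m;
        decimal betragNull = 0.205m;

        // Turnover counter: sum first, round once.
        var turnover = Math.Round(betragNormal + betragNull, 2, MidpointRounding.AwayFromZero); // 41.51

        // QR code: round each amount individually, then compare the totals.
        var qrTotal = Math.Round(betragNormal, 2, MidpointRounding.AwayFromZero)  // 41.31
                    + Math.Round(betragNull, 2, MidpointRounding.AwayFromZero);   // 0.21

        Console.WriteLine($"{turnover} vs {qrTotal}"); // 41.51 vs 41.52
    }
}
```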

In addition to that, string.Format and Math.Round behave differently.

The main difference between the two methods lies in their rounding mechanisms. Here are the details:

  • string.Format("{0:F2}", 0.205): this call renders 0.205 as 0.21 on .NET Framework. The formatter there rounds a decimal representation of the double, and since the digit after the second decimal place is 5, the value is rounded away from zero to 0.21. Note that this is midpoint away-from-zero behavior rather than banker's rounding; newer .NET runtimes format the exact binary value and may therefore produce 0.20 instead.

  • Math.Round(0.205, 2): by default this overload uses banker's rounding (MidpointRounding.ToEven), not "away from zero"; here, however, the midpoint rule doesn't even come into play. Due to limitations in floating point precision, 0.205 is represented internally as slightly less than 0.205 (something like 0.20499999999999996), so Math.Round(0.205, 2) rounds down to 0.20.

These methods can produce different results due to the different rounding strategies they use and the limitations of floating point precision. It's always important to consider these factors when deciding which method to use for rounding numbers in your C# programs.
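A small sketch contrasting the calls; the outputs of the first two lines are intentionally not asserted because they depend on the runtime, while the decimal variant is exact:

```csharp
using System;

public static class FormatVsRound
{
    public static void Main()
    {
        double value = 0.205; // internally ~0.20499999999999998...

        Console.WriteLine(string.Format("{0:F2}", value)); // formatting pipeline; runtime-dependent
        Console.WriteLine(Math.Round(value, 2));           // midpoint rounding on the binary double

        // decimal represents 0.205 exactly, so the midpoint rule applies as expected:
        Console.WriteLine(Math.Round(0.205m, 2, MidpointRounding.AwayFromZero)); // 0.21
    }
}
```

Using decimal for monetary amounts sidesteps the binary representation problem entirely, which is why the turnover example above behaves predictably with decimal arithmetic.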

Cleanup temporary file on error

In #284 we introduced a FileStream using a temporary file.

This file is deleted after the operation is completed here.
If an exception is thrown in the method the file will not be deleted.
Since the method uses yield, cleanup code placed after the loop is never reached when an error occurs mid-enumeration.

We need to find a way to clean up this file if an error happens.
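One possible approach (sketch; file handling and names are illustrative, not the code from #284): C# does allow `yield return` inside a try block that has only a `finally` (no catch). The finally runs when iteration completes, when an exception escapes, or when the enumerator is disposed early, which `foreach` and `await foreach` do automatically:

```csharp
using System.Collections.Generic;
using System.IO;

public static class TempFileExport
{
    public static IEnumerable<string> ReadChunks(string sourcePath)
    {
        var tempFile = Path.GetTempFileName();
        try
        {
            File.Copy(sourcePath, tempFile, overwrite: true);
            foreach (var line in File.ReadLines(tempFile))
            {
                yield return line;
            }
        }
        finally
        {
            // Runs on completion, on exception, and when the consumer
            // disposes the enumerator without finishing the iteration.
            File.Delete(tempFile);
        }
    }
}
```

The remaining gap is a consumer that neither finishes the iteration nor disposes the enumerator; in that case the finally only runs at finalization, so callers should always enumerate via foreach or dispose explicitly.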
